Jesus Sanchez-Palencia [Tue, 3 Jul 2018 22:42:54 +0000 (15:42 -0700)]
net/sched: Add HW offloading capability to ETF
Add infrastructure so the ETF qdisc supports HW offload of time-based transmission.
For HW offload, the time-sorted list is still used, so packets are
always dequeued in order of txtime.
Example:
$ tc qdisc replace dev enp2s0 parent root handle 100 mqprio num_tc 3 \
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 queues 1@0 1@1 2@2 hw 0
$ tc qdisc add dev enp2s0 parent 100:1 etf offload delta 100000 \
clockid CLOCK_REALTIME
In this example, the Qdisc will use HW offload for the control of the
transmission time through the network adapter. The hrtimer used for
packet scheduling inside the qdisc will use the clockid CLOCK_REALTIME
as reference and packets leave the Qdisc "delta" (100000) nanoseconds
before their transmission time. Because this will be using HW offload and
since dynamic clocks are not supported by the hrtimer, the system clock
and the PHC clock must be synchronized for this mode to behave as
expected.
Signed-off-by: Jesus Sanchez-Palencia <jesus.sanchez-palencia@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vinicius Costa Gomes [Tue, 3 Jul 2018 22:42:53 +0000 (15:42 -0700)]
net/sched: Introduce the ETF Qdisc
The ETF (Earliest TxTime First) qdisc uses the information added
earlier in this series (the socket option SO_TXTIME and the new
role of sk_buff->tstamp) to schedule packet transmission based
on absolute time.
For some workloads, just bandwidth enforcement is not enough, and
precise control of the transmission of packets is necessary.
Example:
$ tc qdisc replace dev enp2s0 parent root handle 100 mqprio num_tc 3 \
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 queues 1@0 1@1 2@2 hw 0
$ tc qdisc add dev enp2s0 parent 100:1 etf delta 100000 \
clockid CLOCK_TAI
In this example, the Qdisc will provide SW best-effort for the control
of the transmission time to the network adapter, the time stamp in the
socket will be in reference to the clockid CLOCK_TAI and packets
will leave the qdisc "delta" (100000) nanoseconds before their transmission
time.
The ETF qdisc will buffer packets sorted by their txtime. It will drop
packets on enqueue() if their skbuff clockid does not match the clock
reference of the Qdisc. Moreover, on dequeue(), a packet will be dropped
if it has expired while waiting in the qdisc.
The qdisc also supports the SO_TXTIME deadline mode. For this mode, it
will dequeue a packet as soon as possible and change the skb timestamp
to 'now' during etf_dequeue().
Note that both the qdisc's and the SO_TXTIME ABIs allow for a clockid
to be configured, but it's been decided that usage of CLOCK_TAI should
be enforced until we decide to allow for other clockids to be used.
The rationale here is that PTP times are usually in the TAI scale, thus
no other clocks should be necessary. For now, the qdisc will return
EINVAL if any clocks other than CLOCK_TAI are used.
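For illustration, a minimal user-space sketch of matching the socket's clock
reference to the qdisc above (assuming the SO_TXTIME option and struct
sock_txtime introduced earlier in this series; not code from this patch):
#include <time.h>
#include <sys/socket.h>
#include <linux/net_tstamp.h>	/* struct sock_txtime */
/* Sketch only: bind the socket's txtime clock to CLOCK_TAI so it matches
 * the "clockid CLOCK_TAI" given to the etf qdisc in the example above. */
static int enable_txtime(int fd)
{
	struct sock_txtime txt = {
		.clockid = CLOCK_TAI,
		.flags   = 0,
	};
	return setsockopt(fd, SOL_SOCKET, SO_TXTIME, &txt, sizeof(txt));
}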
Signed-off-by: Jesus Sanchez-Palencia <jesus.sanchez-palencia@intel.com>
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vinicius Costa Gomes [Tue, 3 Jul 2018 22:42:52 +0000 (15:42 -0700)]
net/sched: Allow creating a Qdisc watchdog with other clocks
This adds 'qdisc_watchdog_init_clockid()', which allows a clockid to be
passed so that other time references can be used when scheduling the
Qdisc to run.
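A hedged sketch of how a qdisc could use the new helper; the prototype
shown in the comment and the my_init_watchdog() wrapper are assumptions for
illustration, not quoted from the patch:
#include <net/pkt_sched.h>
/* Assumed prototype:
 *   void qdisc_watchdog_init_clockid(struct qdisc_watchdog *wd,
 *                                    struct Qdisc *qdisc, clockid_t clockid);
 * Initialize the watchdog against CLOCK_TAI instead of the default clock,
 * so later qdisc_watchdog_schedule_ns() calls fire relative to TAI. */
static void my_init_watchdog(struct qdisc_watchdog *wd, struct Qdisc *sch)
{
	qdisc_watchdog_init_clockid(wd, sch, CLOCK_TAI);
}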
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Richard Cochran [Tue, 3 Jul 2018 22:42:51 +0000 (15:42 -0700)]
net: packet: Hook into time based transmission.
For raw layer-2 packets, copy the desired future transmit time from
the CMSG cookie into the skb.
Signed-off-by: Richard Cochran <rcochran@linutronix.de>
Signed-off-by: Jesus Sanchez-Palencia <jesus.sanchez-palencia@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jesus Sanchez-Palencia [Tue, 3 Jul 2018 22:42:50 +0000 (15:42 -0700)]
net: ipv6: Hook into time based transmission
Add a struct sockcm_cookie parameter to ip6_setup_cork() so
we can easily re-use the transmit_time field from struct inet_cork
for most paths, by copying the timestamp from the CMSG cookie.
This is later copied into the skb during __ip6_make_skb().
For the raw fast path, also pass the sockcm_cookie as a parameter
so we can just perform the copy at rawv6_send_hdrinc() directly.
Signed-off-by: Jesus Sanchez-Palencia <jesus.sanchez-palencia@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jesus Sanchez-Palencia [Tue, 3 Jul 2018 22:42:49 +0000 (15:42 -0700)]
net: ipv4: Hook into time based transmission
Add a transmit_time field to struct inet_cork, then copy the
timestamp from the CMSG cookie at ip_setup_cork() so we can
safely copy it into the skb later during __ip_make_skb().
For the raw fast path, just perform the copy at raw_send_hdrinc().
Signed-off-by: Richard Cochran <rcochran@linutronix.de>
Signed-off-by: Jesus Sanchez-Palencia <jesus.sanchez-palencia@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Richard Cochran [Tue, 3 Jul 2018 22:42:48 +0000 (15:42 -0700)]
net: Add a new socket option for a future transmit time.
This patch introduces SO_TXTIME. User space enables this option in
order to pass a desired future transmit time in a CMSG when calling
sendmsg(2). The argument to this socket option is an 8-byte struct
provided by the uapi header net_tstamp.h, defined as:
struct sock_txtime {
clockid_t clockid;
u32 flags;
};
Note that the new fields were added to struct sock by filling a 2-byte
hole found in the struct. For that reason, neither the struct size nor the
number of cachelines was altered.
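For illustration, a hedged user-space sketch of passing the desired transmit
time along with a packet once SO_TXTIME is enabled; the SCM_TXTIME cmsg type
and the 64-bit nanosecond payload are assumptions made for this example, not
quoted from the patch:
#include <stdint.h>
#include <string.h>
#include <sys/uio.h>
#include <sys/socket.h>
/* Sketch: send one payload with a desired future transmit time, expressed
 * in nanoseconds of the clock selected via SO_TXTIME. */
static ssize_t send_at(int fd, const void *buf, size_t len, uint64_t txtime_ns)
{
	char control[CMSG_SPACE(sizeof(txtime_ns))] = {0};
	struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = control,
		.msg_controllen = sizeof(control),
	};
	struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
	cm->cmsg_level = SOL_SOCKET;
	cm->cmsg_type = SCM_TXTIME;	/* assumed alias of SO_TXTIME */
	cm->cmsg_len = CMSG_LEN(sizeof(txtime_ns));
	memcpy(CMSG_DATA(cm), &txtime_ns, sizeof(txtime_ns));
	return sendmsg(fd, &msg, 0);
}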
Signed-off-by: Richard Cochran <rcochran@linutronix.de>
Signed-off-by: Jesus Sanchez-Palencia <jesus.sanchez-palencia@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jesus Sanchez-Palencia [Tue, 3 Jul 2018 22:42:47 +0000 (15:42 -0700)]
net: Clear skb->tstamp only on the forwarding path
This is done in preparation for the upcoming time based transmission
patchset. Now that skb->tstamp will be used to hold the packet's txtime,
we must ensure that it is cleared when traversing namespaces.
Also, doing that from skb_scrub_packet() before the early return would
break our feature when tunnels are used.
Signed-off-by: Jesus Sanchez-Palencia <jesus.sanchez-palencia@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Tue, 3 Jul 2018 21:17:31 +0000 (16:17 -0500)]
isdn: mark expected switch fall-throughs
In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.
Warning level 2 was used: -Wimplicit-fallthrough=2
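For context, a generic sketch (not taken from the isdn code) of the kind of
annotation these patches add; -Wimplicit-fallthrough=2 accepts a comment
containing "fall through", so the marked case below no longer warns:
static int steps_from(int state)
{
	int steps = 0;
	switch (state) {
	case 2:
		steps++;
		/* fall through */
	case 1:
		steps++;
		break;
	default:
		break;
	}
	return steps;
}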
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Marcel Ziswiler [Tue, 3 Jul 2018 15:06:49 +0000 (17:06 +0200)]
net: usb: asix: allow optionally getting mac address from device tree
For embedded use where e.g. AX88772B chips may be used without external
EEPROMs, the boot loader may choose to pass the MAC address to be used
via device tree. Therefore, allow for optionally getting the MAC
address from device tree data, e.g. as follows (excerpt from a T30 based
board, local-mac-address to be filled in by the boot loader):
/* EHCI instance 1: USB2_DP/N -> AX88772B */
usb@7d004000 {
	status = "okay";
	#address-cells = <1>;
	#size-cells = <0>;
	asix@1 {
		reg = <1>;
		local-mac-address = [00 00 00 00 00 00];
	};
};
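On the driver side, a hedged sketch of how such a property is typically
consumed (illustrative only; the actual asix change may differ and the
try_dt_mac_address() wrapper is hypothetical):
#include <linux/of_net.h>
#include <linux/etherdevice.h>
/* Sketch: copy a MAC address provided via DT into the netdev if one is
 * present and valid; otherwise keep the EEPROM/random address. */
static void try_dt_mac_address(struct net_device *netdev,
			       struct device_node *np)
{
	const void *mac = of_get_mac_address(np);

	if (mac && is_valid_ether_addr(mac))
		ether_addr_copy(netdev->dev_addr, mac);
}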
Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wei Yongjun [Tue, 3 Jul 2018 13:45:12 +0000 (13:45 +0000)]
net: sched: act_pedit: fix possible memory leak in tcf_pedit_init()
'keys_ex' is malloced by tcf_pedit_keys_ex_parse() in tcf_pedit_init(),
but not all of the error handling paths free it, which may cause a memory
leak. This patch fixes it.
Fixes: 71d0ed7079df ("net/act_pedit: Support using offset relative to the conventional network headers")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 4 Jul 2018 12:40:02 +0000 (21:40 +0900)]
Merge branch 'bridge-iproute2-isolated-port-and-selftests'
Nikolay Aleksandrov says:
====================
bridge: iproute2 isolated port and selftests
Add support to iproute2 for port isolation config and selftests for it.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Tue, 3 Jul 2018 12:42:44 +0000 (15:42 +0300)]
selftests: forwarding: test for bridge port isolation
This test checks whether the bridge port isolation feature works as expected
by performing ping/ping6 tests between hosts that are isolated (should
not work) and between an isolated and a non-isolated host (should work).
The same test is performed for flooding from and to isolated and
non-isolated ports.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Tue, 3 Jul 2018 12:42:43 +0000 (15:42 +0300)]
selftests: forwarding: lib: extract ping and ping6 so they can be reused
Extract the ping and ping6 command execution so the return value can be
checked by the caller; this is needed for the port isolation tests that are
intended to fail.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 4 Jul 2018 12:30:47 +0000 (21:30 +0900)]
Merge branch 'vhost_net-Avoid-vq-kicks-during-busyloop'
Toshiaki Makita says:
====================
vhost_net: Avoid vq kicks during busyloop
Under heavy load vhost tx busypoll tends not to suppress vq kicks, which
causes poor guest tx performance. The detailed scenario is described in
the commit log of patch 2.
Rx does not seem to have that serious a problem, but for consistency I made a
similar change on rx to avoid rx wakeups (patch 3).
Additionally, patch 4 avoids rx kicks under heavy load during
busypoll.
Tx performance is greatly improved by this change. I don't see a notable
performance change on rx with this series, though.
Performance numbers (tx):
- Bulk transfer from guest to external physical server.
[Guest]->vhost_net->tap--(XDP_REDIRECT)-->i40e --(wire)--> [Server]
- Set 10us busypoll.
- Guest disables checksum and TSO because of host XDP.
- Measured single flow Mbps by netperf, and kicks by perf kvm stat
(EPT_MISCONFIG event).
                                 Before                  After
                               Mbps  kicks/s          Mbps  kicks/s
UDP_STREAM 1472byte                    247758                     27
                Send        3645.37               6958.10
                Recv        3588.56               6958.10
           1byte                         9865                     37
                Send           4.34                  5.43
                Recv           4.17                  5.26
TCP_STREAM                  8801.03     45794     9592.77       2884
v2:
- Split patches into 3 parts (renaming variables, tx-kick fix, rx-wakeup
fix).
- Avoid rx-kicks too (patch 4).
- Don't memorize endtime as it is not needed for now.
====================
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toshiaki Makita [Tue, 3 Jul 2018 07:31:34 +0000 (16:31 +0900)]
vhost_net: Avoid rx vring kicks during busyloop
We may run out of available rx ring descriptors under heavy load, but
busypoll does not detect it, so busypoll may exit prematurely. Avoid this by
checking whether the rx ring is full during busypoll.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toshiaki Makita [Tue, 3 Jul 2018 07:31:33 +0000 (16:31 +0900)]
vhost_net: Avoid rx queue wake-ups during busypoll
We may run handle_rx() while rx work is queued. For example a packet can
push the rx work during the window before handle_rx calls
vhost_net_disable_vq().
In that case busypoll immediately exits due to the vhost_has_work()
condition and enables the vq again. This can lead to more unnecessary rx
wake-ups, so poll the rx work instead of enabling the vq.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toshiaki Makita [Tue, 3 Jul 2018 07:31:32 +0000 (16:31 +0900)]
vhost_net: Avoid tx vring kicks during busyloop
Under heavy load vhost busypoll may run without suppressing
notification. For example tx zerocopy callback can push tx work while
handle_tx() is running, then busyloop exits due to vhost_has_work()
condition and enables notification but immediately reenters handle_tx()
because the pushed work was tx. In this case handle_tx() tries to
disable notification again, but when using event_idx it by design
cannot. Then busyloop will run without suppressing notification.
Another example is the case where handle_tx() tries to enable
notification but avail idx is advanced so disables it again. This case
also leads to the same situation with event_idx.
The problem is that once we enter this situation busyloop does not work
under heavy load for a considerable amount of time, because a notification
is likely to happen during busyloop and handle_tx() immediately enables
notification after the notification happens. Specifically busyloop detects
the notification via vhost_has_work() and then handle_tx() calls
vhost_enable_notify(). Because the detected work was the tx work, it
enters handle_tx(), and enters busyloop without suppression again.
This is likely to be repeated, so with event_idx we are almost unable
to suppress notifications in this case.
To fix this, poll the work instead of enabling notification when
busypoll is interrupted by something. IMHO vhost_has_work() is a kind of
interruption rather than a signal to completely cancel the busypoll, so
let's run busypoll after the necessary work is done.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Toshiaki Makita [Tue, 3 Jul 2018 07:31:31 +0000 (16:31 +0900)]
vhost_net: Rename local variables in vhost_net_rx_peek_head_len
So we can easily see which variable is for which, tx or rx.
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Qiaobin Fu [Sun, 1 Jul 2018 19:16:27 +0000 (15:16 -0400)]
net:sched: add action inheritdsfield to skbedit
The new action inheritdsfield copies the DS field of
IPv4 and IPv6 packets into skb->priority. This enables
later classification of packets based on the DS field.
v5:
*Update the drop counter for TC_ACT_SHOT
v4:
*Do not allow setting flags other than the expected ones.
*Allow dumping the pure flags.
v3:
*Use optional flags, so that it won't break old versions of tc.
*Allow users to set both SKBEDIT_F_PRIORITY and SKBEDIT_F_INHERITDSFIELD flags.
v2:
*Fix the style issue
*Move the code from skbmod to skbedit
Original idea by Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Qiaobin Fu <qiaobinf@bu.edu>
Reviewed-by: Michel Machado <michel@digirati.com.br>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Davide Caratti <dcaratti@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 4 Jul 2018 05:18:46 +0000 (14:18 +0900)]
Merge branch 'More-mirror-to-gretap-tests-with-bridge-in-UL'
Petr Machata says:
====================
More mirror-to-gretap tests with bridge in UL
This patchset adds two more tests where the mirror-to-gretap has a
bridge in the underlay packet path, without a VLAN above or below that
bridge.
In patch #1, a non-VLAN-filtering bridge is tested.
In patch #2, a VLAN-filtering bridge is tested.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Mon, 2 Jul 2018 17:58:56 +0000 (19:58 +0200)]
selftests: forwarding: Test mirror-to-gretap w/ UL 802.1q
Test for "tc action mirred egress mirror" that mirrors to gretap when
the underlay route points at a VLAN-aware bridge (802.1q).
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Mon, 2 Jul 2018 17:58:49 +0000 (19:58 +0200)]
selftests: forwarding: Test mirror-to-gretap w/ UL 802.1d
Test for "tc action mirred egress mirror" that mirrors to gretap when
the underlay route points at a VLAN-unaware bridge (802.1d).
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 4 Jul 2018 05:06:20 +0000 (14:06 +0900)]
Merge branch 'Handle-multiple-received-packets-at-each-stage'
Edward Cree says:
====================
Handle multiple received packets at each stage
This patch series adds the capability for the network stack to receive a
list of packets and process them as a unit, rather than handling each
packet singly in sequence. This is done by factoring out the existing
datapath code at each layer and wrapping it in list handling code.
The motivation for this change is twofold:
* Instruction cache locality. Currently, running the entire network
stack receive path on a packet involves more code than will fit in the
lowest-level icache, meaning that when the next packet is handled, the
code has to be reloaded from more distant caches. By handling packets
in "row-major order", we ensure that the code at each layer is hot for
most of the list. (There is a corresponding downside in _data_ cache
locality, since we are now touching every packet at every layer, but in
practice there is easily enough room in dcache to hold one cacheline of
each of the 64 packets in a NAPI poll.)
* Reduction of indirect calls. Owing to Spectre mitigations, indirect
function calls are now more expensive than ever; they are also heavily
used in the network stack's architecture (see [1]). By replacing 64
indirect calls to the next-layer per-packet function with a single
indirect call to the next-layer list function, we can save CPU cycles.
Drivers pass an SKB list to the stack at the end of the NAPI poll; this
gives a natural batch size (the NAPI poll weight) and avoids waiting at
the software level for further packets to make a larger batch (which
would add latency). It also means that the batch size is automatically
tuned by the existing interrupt moderation mechanism.
The stack then runs each layer of processing over all the packets in the
list before proceeding to the next layer. Where the 'next layer' (or
the context in which it must run) differs among the packets, the stack
splits the list; this 'late demux' means that packets which differ only
in later headers (e.g. same L2/L3 but different L4) can traverse the
early part of the stack together.
Also, where the next layer is not (yet) list-aware, the stack can revert
to calling the rest of the stack in a loop; this allows gradual/creeping
listification, with no 'flag day' patch needed to listify everything.
Patches 1-2 simply place received packets on a list during the event
processing loop on the sfc EF10 architecture, then call the normal stack
for each packet singly at the end of the NAPI poll. (Analogues of patch
#2 for other NIC drivers should be fairly straightforward.)
Patches 3-9 extend the list processing as far as the IP receive handler.
Patches 1-2 alone give about a 10% improvement in packet rate in the
baseline test; adding patches 3-9 raises this to around 25%.
Performance measurements were made with NetPerf UDP_STREAM, using 1-byte
packets and a single core to handle interrupts on the RX side; this was
in order to measure as simply as possible the packet rate handled by a
single core. Figures are in Mbit/s; divide by 8 to obtain Mpps. The
setup was tuned for maximum reproducibility, rather than raw performance.
Full details and more results (both with and without retpolines) from a
previous version of the patch series are presented in [2].
The baseline test uses four streams, and multiple RXQs all bound to a
single CPU (the netperf binary is bound to a neighbouring CPU). These
tests were run with retpolines.
net-next: 6.91 Mb/s (datum)
after 9: 8.46 Mb/s (+22.5%)
Note however that these results are not robust; changes in the parameters
of the test sometimes shrink the gain to single-digit percentages. For
instance, when using only a single RXQ, only a 4% gain was seen.
One test variation was the use of software filtering/firewall rules.
Adding a single iptables rule (UDP port drop on a port range not matching
the test traffic), thus making the netfilter hook have work to do,
reduced baseline performance but showed a similar gain from the patches:
net-next: 5.02 Mb/s (datum)
after 9: 6.78 Mb/s (+35.1%)
Similarly, testing with a set of TC flower filters (kindly supplied by
Cong Wang) gave the following:
net-next: 6.83 Mb/s (datum)
after 9: 8.86 Mb/s (+29.7%)
These data suggest that the batching approach remains effective in the
presence of software switching rules, and perhaps even improves the
performance of those rules by allowing them and their codepaths to stay
in cache between packets.
Changes from v3:
* Fixed build error when CONFIG_NETFILTER=n (thanks kbuild).
Changes from v2:
* Used standard list handling (and skb->list) instead of the skb-queue
functions (that use skb->next, skb->prev).
- As part of this, changed from a "dequeue, process, enqueue" model to
using list_for_each_safe, list_del, and (new) list_cut_before.
* Altered __netif_receive_skb_core() changes in patch 6 as per Willem de
Bruijn's suggestions (separate **ppt_prev from *pt_prev; renaming).
* Removed patches to Generic XDP, since they were producing no benefit.
I may revisit them later.
* Removed RFC tags.
Changes from v1:
* Rebased across 2 years' net-next movement (surprisingly straightforward).
- Added Generic XDP handling to netif_receive_skb_list_internal()
- Dealt with changes to PFMEMALLOC setting APIs
* General cleanup of code and comments.
* Skipped function calls for empty lists at various points in the stack
(patch #9).
* Added listified Generic XDP handling (patches 10-12), though it doesn't
seem to help (see above).
* Extended testing to cover software firewalls / netfilter etc.
[1] http://vger.kernel.org/netconf2018_files/DavidMiller_netconf2018.pdf
[2] http://vger.kernel.org/netconf2018_files/EdwardCree_netconf2018.pdf
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 2 Jul 2018 15:14:44 +0000 (16:14 +0100)]
net: don't bother calling list RX functions on empty lists
Generally the check should be very cheap, as the sk_buff_head is in cache.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 2 Jul 2018 15:14:34 +0000 (16:14 +0100)]
net: ipv4: listify ip_rcv_finish
ip_rcv_finish_core(), if it does not drop, sets skb->dst by either early
demux or route lookup. The last step, calling dst_input(skb), is left to
the caller; in the listified case, we split to form sublists with a common
dst, but then ip_sublist_rcv_finish() just calls dst_input(skb) in a loop.
The next step in listification would thus be to add a list_input() method
to struct dst_entry.
Early demux is an indirect call based on iph->protocol; this is another
opportunity for listification which is not taken here (it would require
slicing up ip_rcv_finish_core() to allow splitting on protocol changes).
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 2 Jul 2018 15:14:12 +0000 (16:14 +0100)]
net: ipv4: listified version of ip_rcv
Also involved adding a way to run a netfilter hook over a list of packets.
Rather than attempting to make netfilter know about lists (which would be
a major project in itself) we just let it call the regular okfn (in this
case ip_rcv_finish()) for any packets it steals, and have it give us back
a list of packets it's synchronously accepted (which normally NF_HOOK
would automatically call okfn() on, but we want to be able to potentially
pass the list to a listified version of okfn().)
The netfilter hooks themselves are indirect calls that still happen per-
packet (see nf_hook_entry_hookfn()), but again, changing that can be left
for future work.
There is potential for out-of-order receives if the netfilter hook ends up
synchronously stealing packets, as they will be processed before any
accepts earlier in the list. However, it was already possible for an
asynchronous accept to cause out-of-order receives, so presumably this is
considered OK.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 2 Jul 2018 15:13:56 +0000 (16:13 +0100)]
net: core: propagate SKB lists through packet_type lookup
__netif_receive_skb_core() does a depressingly large amount of per-packet
work that can't easily be listified, because the another_round looping
makes it nontrivial to slice up into smaller functions.
Fortunately, most of that work disappears in the fast path:
* Hardware devices generally don't have an rx_handler
* Unless you're tcpdumping or something, there is usually only one ptype
* VLAN processing comes before the protocol ptype lookup, so doesn't force
a pt_prev deliver
so normally, __netif_receive_skb_core() will run straight through and pass
back the one ptype found in ptype_base[hash of skb->protocol].
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 2 Jul 2018 15:13:40 +0000 (16:13 +0100)]
net: core: another layer of lists, around PF_MEMALLOC skb handling
First example of a layer splitting the list (rather than merely taking
individual packets off it).
Involves new list.h function, list_cut_before(), like list_cut_position()
but cuts on the other side of the given entry.
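A hedged sketch of the difference, assuming list_cut_before() takes the same
arguments as list_cut_position(); the split_before() wrapper is only for
illustration:
#include <linux/list.h>
/* Sketch: move every entry that precedes 'entry' from 'head' onto 'sublist',
 * leaving 'entry' itself (and everything after it) on 'head'.
 * list_cut_position() would instead move entries up to and including 'entry'. */
static void split_before(struct list_head *sublist, struct list_head *head,
			 struct list_head *entry)
{
	INIT_LIST_HEAD(sublist);
	list_cut_before(sublist, head, entry);
}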
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 2 Jul 2018 15:13:24 +0000 (16:13 +0100)]
net: core: Another step of skb receive list processing
netif_receive_skb_list_internal() now processes a list and hands it
on to the next function.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 2 Jul 2018 15:13:11 +0000 (16:13 +0100)]
net: core: unwrap skb list receive slightly further
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 2 Jul 2018 15:12:53 +0000 (16:12 +0100)]
sfc: batch up RX delivery
Improves packet rate of 1-byte UDP receives by up to 10%.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 2 Jul 2018 15:12:45 +0000 (16:12 +0100)]
net: core: trivial netif_receive_skb_list() entry point
Just calls netif_receive_skb() in a loop.
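As a rough illustration of how a driver uses the new entry point at the end
of its NAPI poll (a sketch under assumptions: my_fetch_rx_skb() is a
hypothetical helper, and the skb->list member is the one mentioned in the
cover letter; the real conversion is the sfc patch in this series):
static int my_poll(struct napi_struct *napi, int budget)
{
	LIST_HEAD(rx_list);
	struct sk_buff *skb;
	int done = 0;

	/* Accumulate this poll's packets instead of passing them up
	 * one at a time... */
	while (done < budget && (skb = my_fetch_rx_skb(napi)) != NULL) {
		list_add_tail(&skb->list, &rx_list);
		done++;
	}

	/* ...then hand the whole batch to the stack in one call. */
	netif_receive_skb_list(&rx_list);

	if (done < budget)
		napi_complete_done(napi, done);
	return done;
}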
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 4 Jul 2018 02:36:55 +0000 (11:36 +0900)]
Merge branch 'sctp-fully-support-for-dscp-and-flowlabel-per-transport'
Xin Long says:
====================
sctp: fully support for dscp and flowlabel per transport
Currently dscp and flowlabel are set from the sock when sending packets,
but being multi-homed, sctp should also support dscp and flowlabel
per transport, as described in section 8.1.12 of RFC 6458.
v1->v2:
- define ip_queue_xmit as inline in net/ip.h, instead of exporting
it in Patch 1/5 according to David's suggestion.
- fix the param len check in sctp_s/getsockopt_peer_addr_params()
in Patch 3/5 to guarantee that an old app built with old kernel
headers could work on the newer kernel per Marcelo's point.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Xin Long [Mon, 2 Jul 2018 10:21:15 +0000 (18:21 +0800)]
sctp: check for ipv6_pinfo legal sndflow with flowlabel in sctp_v6_get_dst
A transport with an illegal flowlabel should not be allowed to send
packets. Other transport protocols already deny this.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xin Long [Mon, 2 Jul 2018 10:21:14 +0000 (18:21 +0800)]
sctp: add support for setting flowlabel when adding a transport
struct sockaddr_in6 has the member sin6_flowinfo that includes the
ipv6 flowlabel, so sctp should also support setting the flowlabel when
adding a transport whose ipaddr comes from userspace.
Note that addrinfo in sctp_sendmsg uses struct in6_addr for
the secondary addrs, which doesn't contain sin6_flowinfo, so
it needs to copy sin6_flowinfo from the primary addr.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xin Long [Mon, 2 Jul 2018 10:21:13 +0000 (18:21 +0800)]
sctp: add spp_ipv6_flowlabel and spp_dscp for sctp_paddrparams
spp_ipv6_flowlabel and spp_dscp are added to sctp_paddrparams in
this patch so that users can set the sctp_sock/asoc/transport dscp
and flowlabel with the spp_flags SPP_IPV6_FLOWLABEL or SPP_DSCP via
SCTP_PEER_ADDR_PARAMS, as described in section 8.1.12 of RFC 6458.
As said in the last patch, it uses '| 0x100000' or '| 0x1' to mark
that flowlabel or dscp has been set, so that their values can be set
to 0.
Note that to guarantee that an old app built with old kernel
headers can work on a newer kernel, the param check in
sctp_g/setsockopt_peer_addr_params() is also improved, following
the way sctp_g/setsockopt_delayed_ack() and some other sockopts
that accept two types of params handle it.
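For illustration, a hedged user-space sketch of using the new knobs; the
flag and field names come from this description, while the exact uapi layout
and header are assumptions:
#include <string.h>
#include <netinet/in.h>
#include <netinet/sctp.h>	/* struct sctp_paddrparams (lksctp headers) */
/* Sketch: set a per-peer-address DSCP and IPv6 flowlabel on an SCTP socket. */
static int set_peer_dscp_flowlabel(int fd, const struct sockaddr *peer,
				   socklen_t peerlen)
{
	struct sctp_paddrparams p;

	memset(&p, 0, sizeof(p));
	memcpy(&p.spp_address, peer, peerlen);
	p.spp_flags = SPP_DSCP | SPP_IPV6_FLOWLABEL;
	p.spp_dscp = 10;		/* e.g. AF11 */
	p.spp_ipv6_flowlabel = 0x12345;
	return setsockopt(fd, IPPROTO_SCTP, SCTP_PEER_ADDR_PARAMS,
			  &p, sizeof(p));
}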
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xin Long [Mon, 2 Jul 2018 10:21:12 +0000 (18:21 +0800)]
sctp: add support for dscp and flowlabel per transport
Like some other per-transport params, flowlabel and dscp are added
to transport, asoc and sctp_sock. By default, the transport sets its
value from the asoc's, and the asoc does so from the sctp_sock. flowlabel
only works for ipv6 transports.
Besides being passed down in sctp_xmit, flow4/6
also needs to set them before looking up the route in get_dst.
Note that it uses '& 0x100000' to check if flowlabel is set and
'& 0x1' (the tos 1st bit is unused) to check if dscp is set by users,
so that they can be set to 0 by sockopt in the next patch.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xin Long [Mon, 2 Jul 2018 10:21:11 +0000 (18:21 +0800)]
ipv4: add __ip_queue_xmit() that supports tos param
This patch introduces __ip_queue_xmit(), through which callers
can pass a tos param without having to set inet->tos. For
ipv6, ip6_xmit() already allows passing a tclass parameter.
It's needed when some transport protocol doesn't use inet->tos,
like sctp's per-transport dscp, which will be added in the next patch.
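Based on this description and the cover letter's note that ip_queue_xmit()
becomes an inline in net/ip.h, the shape of the change is roughly as sketched
below; the exact prototype is an assumption:
int __ip_queue_xmit(struct sock *sk, struct sk_buff *skb, struct flowi *fl,
		    __u8 tos);
/* Existing callers keep their behaviour by forwarding inet->tos, while a
 * transport such as SCTP can pass a per-transport dscp/tos directly to
 * __ip_queue_xmit(). */
static inline int ip_queue_xmit(struct sock *sk, struct sk_buff *skb,
				struct flowi *fl)
{
	return __ip_queue_xmit(sk, skb, fl, inet_sk(sk)->tos);
}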
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Walleij [Sat, 30 Jun 2018 11:17:31 +0000 (13:17 +0200)]
net: dsa: Add Vitesse VSC73xx DSA router driver
This adds a DSA driver for:
Vitesse VSC7385 SparX-G5 5-port Integrated Gigabit Ethernet Switch
Vitesse VSC7388 SparX-G8 8-port Integrated Gigabit Ethernet Switch
Vitesse VSC7395 SparX-G5e 5+1-port Integrated Gigabit Ethernet Switch
Vitesse VSC7398 SparX-G8e 8-port Integrated Gigabit Ethernet Switch
These switches have a built-in 8051 CPU and can download and execute
firmware in this CPU. They can also be configured to use an external
CPU handling the switch in a memory-mapped manner by connecting to
that external CPU's memory bus.
This driver (currently) only takes control of the switch chip over
SPI and configures it to route packets around when connected to a
CPU port. The chip has embedded PHYs and VLAN support so we model it
using DSA as a best fit so we can easily add VLAN support and maybe
later also exploit the internal frame header to get more direct
control over the switch.
The four built-in GPIO lines are exposed using a standard GPIO chip.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Walleij [Sat, 30 Jun 2018 11:17:30 +0000 (13:17 +0200)]
net: phy: vitesse: Add support for VSC73xx
The VSC7385, VSC7388, VSC7395 and VSC7398 are integrated
switch/router chips for 5+1 or 8-port switches/routers. When
managed directly by Linux using DSA we need to do a special
set-up "dance" on the PHY. Unfortunately these sequences
switches the PHY to undocumented pages named 2a30 and 52b6
and does undocumented things. It is described by these opaque
sequences also in the reference manual. This is a best
effort to integrate it anyways.
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Walleij [Sat, 30 Jun 2018 11:17:29 +0000 (13:17 +0200)]
net: dsa: Add DT bindings for Vitesse VSC73xx switches
This adds the device tree bindings for the Vitesse VSC73xx
switches. We also add the vendor name for Vitesse.
Cc: devicetree@vger.kernel.org
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 3 Jul 2018 23:53:53 +0000 (08:53 +0900)]
Merge git://git./linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:
====================
pull-request: bpf-next 2018-07-03
The following pull-request contains BPF updates for your *net-next* tree.
The main changes are:
1) Various improvements to bpftool and libbpf, that is, bpftool build
speed improvements, missing BPF program types added for detection
by section name, ability to load programs from '.text' section is
made to work again, and better bash completion handling, from Jakub.
2) Improvements to nfp JIT's map read handling which allows for optimizing
memcpy from map to packet, from Jiong.
3) New BPF sample is added which demonstrates XDP in combination with
bpf_perf_event_output() helper to sample packets on all CPUs, from Toke.
4) Add a new BPF kselftest case for tracking connect(2) BPF hooks
infrastructure in combination with TFO, from Andrey.
5) Extend the XDP/BPF xdp_rxq_info sample code with a cmdline option to
read payload from packet data in order to use it for benchmarking.
Also for '--action XDP_TX' option implement swapping of MAC addresses
to avoid drops on some hardware seen during testing, from Jesper.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 3 Jul 2018 14:23:48 +0000 (23:23 +0900)]
Merge branch 'aquantia-various-ethtool-ops-implementation'
Igor Russkikh says:
====================
net: aquantia: various ethtool ops implementation
In this patchset Anton Mikaev and I added some useful ethtool operations:
- ring size changes
- link renegotiation
- flow control management
The patchset also improves the init/deinit sequence.
V3 changes:
- After review and analysis it is clear that the rtnl lock (which is
taken by default on ethtool ops) is enough to protect against possible
overlapping of dev open/close. Thus, just drop the internal mutex.
V2 changes:
- using mutex to secure simultaneous dev close/open
- using state var to store/restore dev state
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Igor Russkikh [Mon, 2 Jul 2018 14:03:39 +0000 (17:03 +0300)]
net: aquantia: bump driver version
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Anton Mikaev [Mon, 2 Jul 2018 14:03:38 +0000 (17:03 +0300)]
net: aquantia: Add renegotiate ethtool operation support
Adds ethtool -r|--negotiate operation support. It triggers a special
control bit on the FW interface, causing the FW to restart link negotiation.
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: Anton Mikaev <amikaev@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Igor Russkikh [Mon, 2 Jul 2018 14:03:37 +0000 (17:03 +0300)]
net: aquantia: Implement rx/tx flow control ethtools callback
Runtime change of pause frame configuration (rx/tx flow control)
via ethtool.
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Igor Russkikh [Mon, 2 Jul 2018 14:03:36 +0000 (17:03 +0300)]
net: aquantia: Improve adapter init/deinit logic
We now pass the link drop status to the FW on init/deinit. This is required
to inform the FW that the driver has taken/released control of the link.
The FW will then manage its own state and device power profile based
on this information. To improve management we remove the mpi_set
function, which ambiguously took both state and speed parameters.
The deinit callback is now a part of the FW ops, as it actually manages the FW.
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Anton Mikaev [Mon, 2 Jul 2018 14:03:35 +0000 (17:03 +0300)]
net: aquantia: Ethtool based ring size configuration
Implemented ring size setup, min/max validation and reconfiguration at
runtime.
Signed-off-by: Anton Mikaev <amikaev@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Mon, 2 Jul 2018 12:09:32 +0000 (07:09 -0500)]
net: stmmac_tc: use 64-bit arithmetic instead of 32-bit
Add the suffix UL to the constant 1024 in order to give the compiler complete
information about the proper arithmetic to use. Notice that this
constant is used in a context that expects an expression of type
u64 (64 bits, unsigned), while the following expressions are currently
being evaluated using 32-bit arithmetic:
qopt->idleslope * 1024 * ptr
qopt->hicredit * 1024 * 8
qopt->locredit * 1024 * 8
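A small generic illustration of the issue (arbitrary example values, not the
driver's): with a plain int constant the multiplication is carried out in
32 bits and can wrap before being widened to u64, whereas the UL suffix
(a 64-bit unsigned long on the affected platforms) promotes the whole
expression:
#include <stdint.h>
#include <stdio.h>
int main(void)
{
	uint32_t idleslope = 3000000;	/* example value large enough to wrap */
	uint32_t ptr = 4;

	uint64_t wrong = idleslope * 1024 * ptr;	/* 32-bit multiply, wraps */
	uint64_t right = idleslope * 1024UL * ptr;	/* evaluated in 64 bits */

	printf("%llu vs %llu\n",
	       (unsigned long long)wrong, (unsigned long long)right);
	return 0;
}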
Addresses-Coverity-ID: 1470246 ("Unintentional integer overflow")
Addresses-Coverity-ID: 1470248 ("Unintentional integer overflow")
Addresses-Coverity-ID: 1470249 ("Unintentional integer overflow")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Acked-by: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Dan Murphy [Fri, 29 Jun 2018 15:35:46 +0000 (10:35 -0500)]
net: phy: DP83TC811: Fix SGMII enable/disable
If SGMII was selected in the DT then the device should
write the SGMII enable bit.
If SGMII is not selected in the DT then the SGMII bit
should be disabled.
Signed-off-by: Dan Murphy <dmurphy@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Dan Murphy [Fri, 29 Jun 2018 15:35:45 +0000 (10:35 -0500)]
net: phy: DP83TC811: Add INT_STAT3
Add INT_STAT3 interrupt setting and clearing
support.
Signed-off-by: Dan Murphy <dmurphy@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 3 Jul 2018 01:26:50 +0000 (10:26 +0900)]
Merge ra./pub/scm/linux/kernel/git/davem/net
Simple overlapping changes in stmmac driver.
Adjust skb_gro_flush_final_remcsum function signature to make GRO list
changes in net-next, as per Stephen Rothwell's example merge
resolution.
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Mon, 2 Jul 2018 19:40:59 +0000 (12:40 -0700)]
Merge branch 'for-next' of git://git./linux/kernel/git/shli/md
Pull MD fixes from Shaohua Li:
"Two small fixes for MD:
- an error handling fix from me
- a recover bug fix for raid10 from BingJing"
* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/shli/md:
md/raid10: fix that replacement cannot complete recovery after reassemble
MD: cleanup resources in failure
Linus Torvalds [Mon, 2 Jul 2018 19:38:14 +0000 (12:38 -0700)]
Merge tag 'for-linus' of git://github.com/stffrdhrn/linux
Pull OpenRISC fixes from Stafford Horne:
"Two fixes for issues which were breaking OpenRISC boot:
- Fix bug in __pte_free_tlb() exposed in 4.18 by Matthew Wilcox's
page table flag addition.
- Fix issue booting on real hardware if delay slot detection
emulation is disabled"
* tag 'for-linus' of git://github.com/stffrdhrn/linux:
openrisc: entry: Fix delay slot exception detection
openrisc: Call destructor during __pte_free_tlb
Linus Torvalds [Mon, 2 Jul 2018 18:18:28 +0000 (11:18 -0700)]
Merge git://git./linux/kernel/git/davem/net
Pull networking fixes from David Miller:
1) Verify netlink attributes properly in nf_queue, from Eric Dumazet.
2) Need to bump memory lock rlimit for test_sockmap bpf test, from
Yonghong Song.
3) Fix VLAN handling in lan78xx driver, from Dave Stevenson.
4) Fix uninitialized read in nf_log, from Jann Horn.
5) Fix raw command length parsing in mlx5, from Alex Vesker.
6) Cleanup loopback RDS connections upon netns deletion, from Sowmini
Varadhan.
7) Fix regressions in FIB rule matching during create, from Jason A.
Donenfeld and Roopa Prabhu.
8) Fix mpls ether type detection in nfp, from Pieter Jansen van Vuuren.
9) More bpfilter build fixes/adjustments from Masahiro Yamada.
10) Fix XDP_{TX,REDIRECT} flushing in various drivers, from Jesper
Dangaard Brouer.
11) fib_tests.sh file permissions were broken, from Shuah Khan.
12) Make sure BH/preemption is disabled in data path of mac80211, from
Denis Kenzior.
13) Don't ignore nla_parse_nested() return values in nl80211, from
Johannes Berg.
14) Properly account sock objects to kmemcg, from Shakeel Butt.
15) Adjustments to setting bpf program permissions to read-only, from
Daniel Borkmann.
16) TCP Fast Open key endianness was broken, it always took on the host
endianness. Whoops. Explicitly make it little endian. From Yuchung
Cheng.
17) Fix prefix route setting for link local addresses in ipv6, from
David Ahern.
18) Potential Spectre v1 in zatm driver, from Gustavo A. R. Silva.
19) Various bpf sockmap fixes, from John Fastabend.
20) Use after free for GRO with ESP, from Sabrina Dubroca.
21) Passing bogus flags to crypto_alloc_shash() in ipv6 SR code, from
Eric Biggers.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (87 commits)
qede: Advertise software timestamp caps when PHC is not available.
qed: Fix use of incorrect size in memcpy call.
qed: Fix setting of incorrect eswitch mode.
qed: Limit msix vectors in kdump kernel to the minimum required count.
ipvlan: call dev_change_flags when ipvlan mode is reset
ipv6: sr: fix passing wrong flags to crypto_alloc_shash()
net: fix use-after-free in GRO with ESP
tcp: prevent bogus FRTO undos with non-SACK flows
bpf: sockhash, add release routine
bpf: sockhash fix omitted bucket lock in sock_close
bpf: sockmap, fix smap_list_map_remove when psock is in many maps
bpf: sockmap, fix crash when ipv6 sock is added
net: fib_rules: bring back rule_exists to match rule during add
hv_netvsc: split sub-channel setup into async and sync
net: use dev_change_tx_queue_len() for SIOCSIFTXQLEN
atm: zatm: Fix potential Spectre v1
s390/qeth: consistently re-enable device features
s390/qeth: don't clobber buffer on async TX completion
s390/qeth: avoid using is_multicast_ether_addr_64bits on (u8 *)[6]
s390/qeth: fix race when setting MAC address
...
David S. Miller [Mon, 2 Jul 2018 13:49:14 +0000 (22:49 +0900)]
Merge branch 'hns3-a-few-code-improvements'
Peng Li says:
====================
net: hns3: a few code improvements
This patchset removes some redundant code and fixes a few code
stylistic issues found during internal concentrated review;
no functional changes are introduced.
---
Change log:
V1 -> V2:
1, remove a patch according to the comment reported by David Miller.
---
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Peng Li [Mon, 2 Jul 2018 07:50:26 +0000 (15:50 +0800)]
net: hns3: modify hnae_ to hnae3_
For consistency, the prefix hnae_ should be changed to hnae3_.
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Huazhong Tan [Mon, 2 Jul 2018 07:50:25 +0000 (15:50 +0800)]
net: hns3: use dma_zalloc_coherent instead of kzalloc/dma_map_single
According to Documentation/DMA-API-HOWTO.txt,
streaming DMA mappings are usually mapped for one DMA transfer, while
network card DMA ring descriptors should use consistent DMA mappings.
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Huazhong Tan [Mon, 2 Jul 2018 07:50:24 +0000 (15:50 +0800)]
net: hns3: give default option while dependency HNS3 set
Giving a default option for HNS3_HCLGE and HNS3_ENET is helpful
when the dependency HNS3 is set. Meanwhile, use an "if HNS3" section
instead of repeating "depends on HNS3".
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Huazhong Tan [Mon, 2 Jul 2018 07:50:23 +0000 (15:50 +0800)]
net: hns3: remove some unused members of some structures
Some members in struct hns3_enet_tqp_vector, struct hnae3_client
and struct hnae3_ae_algo are unused.
This patch removes them.
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Huazhong Tan [Mon, 2 Jul 2018 07:50:22 +0000 (15:50 +0800)]
net: hns3: remove a redundant hclge_cmd_csq_done
Set 'complete' in the first hclge_cmd_csq_done() call of hclge_cmd_send()
and check it later; it is unnecessary to call hclge_cmd_csq_done()
again.
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Huazhong Tan [Mon, 2 Jul 2018 07:50:21 +0000 (15:50 +0800)]
net: hns3: simplify hclge_cmd_csq_clean
csq is used as a ring buffer; the value of the desc will be replaced
on its next use. This patch removes the unnecessary memset and just
updates next_to_clean.
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Huazhong Tan [Mon, 2 Jul 2018 07:50:20 +0000 (15:50 +0800)]
net: hns3: remove some redundant assignments
Remove some redundant assignments.
desc->flag = cpu_to_le16(HCLGE_CMD_FLAG_NO_INTR | HCLGE_CMD_FLAG_IN)
already sets bit HCLGE_CMD_FLAG_WR to zero, and so do the others.
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Huazhong Tan [Mon, 2 Jul 2018 07:50:19 +0000 (15:50 +0800)]
net: hns3: remove useless code in hclge_cmd_send
There are some useless type casts and prints in hclge_cmd_send.
This patch removes them.
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Huazhong Tan [Mon, 2 Jul 2018 07:50:18 +0000 (15:50 +0800)]
net: hns3: remove unused hclge_ring_to_dma_dir
hclge_ring_to_dma_dir is not used anywhere.
This patch removes it.
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Colin Ian King [Mon, 2 Jul 2018 07:37:21 +0000 (08:37 +0100)]
atm: zatm: remove redundant pointer zatm_dev
Pointer zatm_dev is being assigned but is never used, hence it is redundant
and can be removed.
Cleans up clang warning:
warning: variable 'zatm_dev' set but not used [-Wunused-but-set-variable]
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Mon, 2 Jul 2018 06:08:13 +0000 (08:08 +0200)]
net: phy: realtek: add support for RTL8211C
RTL8211C has an issue when operating in Gigabit slave mode, therefore
genphy driver can't be used. See also this U-boot change.
https://lists.denx.de/pipermail/u-boot/2016-March/249712.html
Add a PHY driver for this chip with the quirk to force Gigabit master
mode. As a note: This will make it impossible to connect two network
ports directly which both are driven by a RTl8211C.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Roman Mashak [Mon, 2 Jul 2018 04:02:02 +0000 (00:02 -0400)]
net sched actions: add extack messages in pedit action
Signed-off-by: Roman Mashak <mrv@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Guenter Roeck [Sun, 1 Jul 2018 20:57:38 +0000 (13:57 -0700)]
TTY: isdn: Replace strncpy with memcpy
gcc 8.1.0 complains:
drivers/isdn/i4l/isdn_tty.c: In function 'isdn_tty_suspend.isra.1':
drivers/isdn/i4l/isdn_tty.c:790:3: warning:
'strncpy' output truncated before terminating nul copying
as many bytes from a string as its length
drivers/isdn/i4l/isdn_tty.c:778:6: note: length computed here
drivers/isdn/i4l/isdn_tty.c: In function 'isdn_tty_resume':
drivers/isdn/i4l/isdn_tty.c:880:3: warning:
'strncpy' output truncated before terminating nul copying
as many bytes from a string as its length
drivers/isdn/i4l/isdn_tty.c:817:6: note: length computed here
Using strncpy() is indeed less than perfect since the length of data to
be copied has already been determined with strlen(). Replace strncpy()
with memcpy() to address the warning and optimize the code a little.
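The pattern being fixed, in generic form (illustrative, not the isdn code
itself): once the length is already known from strlen(), strncpy() adds no
safety and gcc 8 warns that the result is not NUL-terminated, so an explicit
memcpy() is clearer:
#include <string.h>
/* Sketch of the transformation: the length was already computed with
 * strlen(), so copy exactly that many bytes and terminate explicitly. */
static void copy_known_len(char *dst, const char *src)
{
	size_t len = strlen(src);

	memcpy(dst, src, len);	/* was: strncpy(dst, src, len); */
	dst[len] = '\0';
}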
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Sun, 1 Jul 2018 17:14:58 +0000 (19:14 +0200)]
net: phy: realtek: add missing entry for RTL8211 to mdio_device_id table
When adding support for RTL8211 I forgot to update the mdio_device_id
table.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Fixes: d241d4aac93f ("net: phy: realtek: add support for RTL8211")
Signed-off-by: David S. Miller <davem@davemloft.net>
Yafang Shao [Sun, 1 Jul 2018 15:31:30 +0000 (23:31 +0800)]
net: expose sk wmem in sock_exceed_buf_limit tracepoint
Currently trace_sock_exceed_buf_limit() only shows rmem info,
but the wmem limit may also be hit.
So expose wmem info in this tracepoint as well.
Regarding memcg, I think it is better to introduce a new tracepoint (if
that is needed), i.e. trace_memcg_limit_hit, rather than showing memcg info
in trace_sock_exceed_buf_limit.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Sat, 30 Jun 2018 22:25:19 +0000 (00:25 +0200)]
r8169: remove old PHY reset hack
This hack (affecting the non-PCIe models only) was introduced in 2004
to deal with link negotiation failures in 1GBit mode. Based on a
comment in the r8169 vendor driver I assume the issue affects RTL8169sb
in combination with particular 1GBit switch models.
Resetting the PHY every 10s and hoping that one fine day we will manage
to establish the link seems very hacky to me. I'd say:
If 1GBit doesn't work reliably in a user's environment then the user
should remove 1GBit from the advertised modes, e.g. by using
ethtool -s <if> advertise <10/100 modes>
If the issue affects one chip version only and that with most link
partners, then we could also think of removing 1GBit from the
advertised modes for this chip version in the driver.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 2 Jul 2018 11:41:31 +0000 (20:41 +0900)]
Merge branch 'qed-fixes'
Sudarsana Reddy Kalluru says:
====================
qed*: Fix series.
The patch series addresses few issues in the qed* drivers.
Please consider applying it to 'net' branch.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Sudarsana Reddy Kalluru [Mon, 2 Jul 2018 03:03:08 +0000 (20:03 -0700)]
qede: Advertise software timestamp caps when PHC is not available.
When the ptp clock is not available for a PF (e.g., higher PFs in NPAR mode),
the get-tsinfo() callback should return the software timestamp capabilities
instead of returning an error.
Fixes: 4c55215c ("qede: Add driver support for PTP")
Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sudarsana Reddy Kalluru [Mon, 2 Jul 2018 03:03:07 +0000 (20:03 -0700)]
qed: Fix use of incorrect size in memcpy call.
Use the correct size value while copying chassis/port id values.
Fixes: 6ad8c632e ("qed: Add support for query/config dcbx.")
Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sudarsana Reddy Kalluru [Mon, 2 Jul 2018 03:03:06 +0000 (20:03 -0700)]
qed: Fix setting of incorrect eswitch mode.
By default, the driver incorrectly sets the eswitch mode to VEB (virtual
Ethernet bridging).
The VEB eswitch mode should be set only when SR-IOV is enabled, and it
should be NONE by default. The patch incorporates this change.
Fixes: 0fefbfbaa ("qed*: Management firmware - notifications and defaults")
Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sudarsana Reddy Kalluru [Mon, 2 Jul 2018 03:03:05 +0000 (20:03 -0700)]
qed: Limit msix vectors in kdump kernel to the minimum required count.
Memory size is limited in the kdump kernel environment. Allocation of more
msix-vectors (or queues) consumes a few tens of MBs of memory, which might
lead to kdump kernel failure.
This patch adds changes to limit the number of MSI-X vectors in the kdump
kernel to the minimum required value (i.e., 2 per engine).
Fixes: fe56b9e6a ("qed: Add module with basic common support")
Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hangbin Liu [Sun, 1 Jul 2018 08:21:21 +0000 (16:21 +0800)]
ipvlan: call dev_change_flags when ipvlan mode is reset
After we change the ipvlan mode from l3 to l2, or vice versa, we only
reset the IFF_NOARP flag, but don't flush the ARP table cache, which will
cause eth->h_dest to be equal to eth->h_source in ipvlan_xmit_mode_l2().
Then the message will not leave the host.
Here is the reproducer on local host:
ip link set eth1 up
ip addr add 192.168.1.1/24 dev eth1
ip link add link eth1 ipvlan1 type ipvlan mode l3
ip netns add net1
ip link set ipvlan1 netns net1
ip netns exec net1 ip link set ipvlan1 up
ip netns exec net1 ip addr add 192.168.2.1/24 dev ipvlan1
ip route add 192.168.2.0/24 via 192.168.1.2
ping 192.168.2.2 -c 2
ip netns exec net1 ip link set ipvlan1 type ipvlan mode l2
ping 192.168.2.2 -c 2
Add the same configuration on the remote host. After we set the mode to l2,
we can see that the src/dst MAC addresses are the same on eth1:
21:26:06.648565 00:b7:13:ad:d3:05 > 00:b7:13:ad:d3:05, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 58356, offset 0, flags [DF], proto ICMP (1), length 84)
192.168.2.1 > 192.168.2.2: ICMP echo request, id 22686, seq 1, length 64
Fix this by calling dev_change_flags(), which will call netdevice notifier
with flag change info.
v2:
a) As pointed out by Wang Cong, check the return value of dev_change_flags() when
changing the dev flags.
b) As suggested by Stefano and Sabrina, move the flags setting before l3mdev_ops,
so we don't need to redo ipvlan_{, un}register_nf_hook() again in the error path.
Reported-by: Jianlin Shi <jishi@redhat.com>
Reviewed-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Fixes: 2ad7bf3638411 ("ipvlan: Initial check-in of the IPVLAN driver.")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Biggers [Sat, 30 Jun 2018 22:26:56 +0000 (15:26 -0700)]
ipv6: sr: fix passing wrong flags to crypto_alloc_shash()
The 'mask' argument to crypto_alloc_shash() uses the CRYPTO_ALG_* flags,
not 'gfp_t'. So don't pass GFP_KERNEL to it.
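The shape of the fix, as a hedged sketch: the third argument of
crypto_alloc_shash() is an algorithm mask built from CRYPTO_ALG_* bits, so a
gfp_t value like GFP_KERNEL does not belong there (the alloc_hmac_tfm()
wrapper below is only for illustration):
#include <crypto/hash.h>
/* Sketch only.  Before: crypto_alloc_shash(name, 0, GFP_KERNEL);
 * GFP_KERNEL is a gfp_t and would be misread as CRYPTO_ALG_* mask bits.
 * After: pass 0, i.e. no type and no mask restrictions. */
static struct crypto_shash *alloc_hmac_tfm(const char *name)
{
	return crypto_alloc_shash(name, 0, 0);
}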
Fixes: bf355b8d2c30 ("ipv6: sr: add core files for SR HMAC support")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Colin Ian King [Sat, 30 Jun 2018 20:39:24 +0000 (21:39 +0100)]
netdevsim: fix sa_idx out of bounds check
Currently if sa_idx is equal to NSIM_IPSEC_MAX_SA_COUNT then
an out-of-bounds read on ipsec->sa will occur. Fix the
incorrect bounds check by using >= rather than >.
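The off-by-one in generic form (a sketch, not the netdevsim code): an index
equal to the array size must be rejected too, hence >= instead of >:
#include <stdbool.h>
#include <stddef.h>
#define MAX_SA_COUNT 33	/* example size, standing in for NSIM_IPSEC_MAX_SA_COUNT */
/* Valid indices are 0 .. MAX_SA_COUNT-1, so 'sa_idx > MAX_SA_COUNT' wrongly
 * lets sa_idx == MAX_SA_COUNT through and reads one element past the end. */
static bool sa_idx_ok(size_t sa_idx)
{
	return sa_idx < MAX_SA_COUNT;	/* i.e. reject sa_idx >= MAX_SA_COUNT */
}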
Detected by CoverityScan, CID#1470226 ("Out-of-bounds-read")
Fixes: 7699353da875 ("netdevsim: add ipsec offload testing")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sabrina Dubroca [Sat, 30 Jun 2018 15:38:55 +0000 (17:38 +0200)]
net: fix use-after-free in GRO with ESP
Since the addition of GRO for ESP, gro_receive can consume the skb and
return -EINPROGRESS. In that case, the lower layer GRO handler cannot
touch the skb anymore.
Commit 5f114163f2f5 ("net: Add a skb_gro_flush_final helper.") converted
some of the gro_receive handlers that can lead to ESP's gro_receive so
that they wouldn't access the skb when -EINPROGRESS is returned, but
missed other spots, mainly in tunneling protocols.
This patch finishes the conversion to using skb_gro_flush_final(), and
adds a new helper, skb_gro_flush_final_remcsum(), used in VXLAN and
GUE.
Fixes: 5f114163f2f5 ("net: Add a skb_gro_flush_final helper.")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Reviewed-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 2 Jul 2018 00:06:24 +0000 (09:06 +0900)]
Merge branch 'xps-symmretric-queue-selection'
Amritha Nambiar says:
====================
Symmetric queue selection using XPS for Rx queues
This patch series implements support for Tx queue selection based on
an Rx queue(s) map. This is done by configuring an Rx queue(s) map per
Tx queue using a sysfs attribute. If the user configuration for Rx
queues does not apply, then the Tx queue selection falls back to XPS
using CPUs and finally to hashing.
XPS is refactored to support Tx queue selection based on either the
CPUs map or the Rx-queues map. The config option CONFIG_XPS needs to be
enabled. By default no receive queues are configured for a Tx queue.
- /sys/class/net/<dev>/queues/tx-*/xps_rxqs
A set of receive queues can be mapped to a set of transmit queues
(many:many), although the common use case is a 1:1 mapping (a
configuration sketch follows below). This enables sending packets on
the same Tx-Rx queue association, which is useful for busy-polling
multi-threaded workloads where it is not possible to pin the threads to
a CPU. This is a rework of Sridhar's patch for symmetric queueing via a
socket option:
https://www.spinics.net/lists/netdev/msg453106.html
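For example, here is a minimal user-space sketch of configuring a 1:1
Tx:Rx mapping (assuming an interface named eth0 and that, like
xps_cpus, the attribute takes a hex bitmap of Rx queues; both are
assumptions made for illustration):

        #include <stdio.h>

        int main(void)
        {
                /* map tx-0 to rx-0: bit 0 of the Rx queue bitmap */
                FILE *f = fopen("/sys/class/net/eth0/queues/tx-0/xps_rxqs", "w");

                if (!f)
                        return 1;
                fprintf(f, "1\n");
                return fclose(f) ? 1 : 0;
        }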
Testing Hints:
Kernel: Linux 4.17.0-rc7+
Interface:
driver: ixgbe
version: 5.1.0-k
firmware-version: 0x00015e0b
Configuration:
ethtool -L $iface combined 16
ethtool -C $iface rx-usecs 1000
sysctl net.core.busy_poll=1000
ATR disabled:
ethtool -K $iface ntuple on
Workload:
A modified memcached that changes the thread selection policy to be
based on the incoming Rx queue of a connection, using the
SO_INCOMING_NAPI_ID socket option. The default is round-robin.
Default: No rxqs_map configured
Symmetric queues: Enable rxqs_map for all queues 1:1 mapped to Tx queue
System:
Architecture: x86_64
CPU(s): 72
Model name: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
16 threads 400K requests/sec
=============================
-------------------------------------------------------------------------------
                                                 Default      Symmetric queues
-------------------------------------------------------------------------------
RTT min/avg/max (usec)                           4/51/2215    2/30/5163
intr/sec                                         26655        18606
contextswitch/sec                                5145         4044
insn per cycle                                   0.43         0.72
cache-misses (% of all cache refs)               6.919        4.310
L1-dcache-load-misses (% of all L1-dcache hits)  4.49         3.29
LLC-load-misses (% of all LL-cache hits)         13.26        8.96
-------------------------------------------------------------------------------
32 threads 400K requests/sec
=============================
-------------------------------------------------------------------------------
                                                 Default      Symmetric queues
-------------------------------------------------------------------------------
RTT min/avg/max (usec)                           10/112/5562  9/46/4637
intr/sec                                         30456        27666
contextswitch/sec                                7552         5133
insn per cycle                                   0.41         0.49
cache-misses (% of all cache refs)               9.357        2.769
L1-dcache-load-misses (% of all L1-dcache hits)  4.09         3.98
LLC-load-misses (% of all LL-cache hits)         12.96        3.96
-------------------------------------------------------------------------------
16 threads 800K requests/sec
=============================
-------------------------------------------------------------------------------
                                                 Default      Symmetric queues
-------------------------------------------------------------------------------
RTT min/avg/max (usec)                           5/151/4989   9/69/2611
intr/sec                                         35686        22907
contextswitch/sec                                25522        12281
insn per cycle                                   0.67         0.74
cache-misses (% of all cache refs)               8.652        6.38
L1-dcache-load-misses (% of all L1-dcache hits)  3.19         2.86
LLC-load-misses (% of all LL-cache hits)         16.53        11.99
-------------------------------------------------------------------------------
32 threads 800K requests/sec
=============================
-------------------------------------------------------------------------------
                                                 Default      Symmetric queues
-------------------------------------------------------------------------------
RTT min/avg/max (usec)                           6/163/6152   8/88/4209
intr/sec                                         47079        26548
contextswitch/sec                                42190        39168
insn per cycle                                   0.45         0.54
cache-misses (% of all cache refs)               8.798        4.668
L1-dcache-load-misses (% of all L1-dcache hits)  6.55         6.29
LLC-load-misses (% of all LL-cache hits)         13.91        10.44
-------------------------------------------------------------------------------
v6:
- Changed the names of some functions to begin with net_if.
- Cleaned up sk_tx_queue_set/sk_rx_queue_set functions.
- Added sk_rx_queue_clear to make it consistent with tx_queue_mapping
initialization.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Amritha Nambiar [Sat, 30 Jun 2018 04:27:12 +0000 (21:27 -0700)]
Documentation: Add explanation for XPS using Rx-queue(s) map
Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amritha Nambiar [Sat, 30 Jun 2018 04:27:07 +0000 (21:27 -0700)]
net-sysfs: Add interface for Rx queue(s) map per Tx queue
Extend the transmit queue sysfs attributes to allow configuring an Rx
queue(s) map per Tx queue. By default no receive queues are configured
for a Tx queue.
- /sys/class/net/eth0/queues/tx-*/xps_rxqs
Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amritha Nambiar [Sat, 30 Jun 2018 04:27:02 +0000 (21:27 -0700)]
net: Enable Tx queue selection based on Rx queues
This patch adds support for picking the Tx queue based on the Rx
queue(s) map configuration set by the admin through the sysfs attribute
for each Tx queue. If the user configuration for the receive queue(s)
map does not apply, then the Tx queue selection falls back to
CPU(s)-map based selection and finally to hashing.
Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amritha Nambiar [Sat, 30 Jun 2018 04:26:57 +0000 (21:26 -0700)]
net: Record receive queue number for a connection
This patch adds a new field, 'skc_rx_queue_mapping', to sock_common,
which holds the receive queue number for the connection. The Rx queue
is marked in tcp_finish_connect() to allow a client app to do
SO_INCOMING_NAPI_ID after a connect() call and get the right queue
association for the socket (see the sketch below). The Rx queue is also
marked in tcp_conn_request() so that the SYN-ACK goes out on the right
Tx queue, associated with the queue on which the SYN was received.
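A hedged user-space sketch of the flow described above (error handling
trimmed; SO_INCOMING_NAPI_ID itself is a pre-existing socket option,
the helper name is made up):

        #include <sys/socket.h>

        /* After connect() succeeds on 'fd', ask which NAPI ID (and hence
         * which Rx queue context) the connection landed on, so the app
         * can steer the handling thread accordingly. */
        static int query_napi_id(int fd)
        {
                int napi_id = 0;
                socklen_t len = sizeof(napi_id);

                if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID,
                               &napi_id, &len) < 0)
                        return -1;
                return napi_id;
        }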
Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amritha Nambiar [Sat, 30 Jun 2018 04:26:51 +0000 (21:26 -0700)]
net: sock: Change tx_queue_mapping in sock_common to unsigned short
Change the 'skc_tx_queue_mapping' field in the sock_common structure
from 'int' to 'unsigned short', with ~0 indicating 'unset' and other
values holding the queue number. This will accommodate adding a new
'unsigned short' field to sock_common in the next patch for
rx_queue_mapping.
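Illustratively (a sketch of the sentinel convention only, with made-up
names; not the kernel's actual accessors):

        #define QUEUE_MAPPING_UNSET     ((unsigned short)~0)    /* 0xffff */

        /* hypothetical helper showing how 'unset' is distinguished */
        static inline bool example_tx_queue_is_set(unsigned short mapping)
        {
                return mapping != QUEUE_MAPPING_UNSET;
        }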
Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amritha Nambiar [Sat, 30 Jun 2018 04:26:46 +0000 (21:26 -0700)]
net: Use static_key for XPS maps
Use static_key for XPS maps to reduce the cost of extra map checks,
similar to how it is used for RPS and RFS. This includes static_key
'xps_needed' for XPS and another for 'xps_rxqs_needed' for XPS using
Rx queues map.
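The pattern is roughly as follows (a hedged sketch; the key names
mirror the description above, the surrounding function is illustrative):

        #include <linux/jump_label.h>

        static DEFINE_STATIC_KEY_FALSE(xps_needed);
        static DEFINE_STATIC_KEY_FALSE(xps_rxqs_needed);

        /* Fast path: the map lookups are skipped entirely (patched-out
         * branch) until at least one XPS map has been configured; the
         * keys would be flipped with static_branch_inc()/_dec() when
         * maps are added or removed. */
        static int example_get_xps_queue(void)
        {
                if (!static_branch_unlikely(&xps_needed))
                        return -1;
                if (static_branch_unlikely(&xps_rxqs_needed)) {
                        /* consult the Rx-queues based map first */
                }
                /* fall back to the CPUs based map */
                return -1;
        }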
Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amritha Nambiar [Sat, 30 Jun 2018 04:26:41 +0000 (21:26 -0700)]
net: Refactor XPS for CPUs and Rx queues
Refactor XPS code to support Tx queue selection based on
CPU(s) map or Rx queue(s) map.
Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Sun, 1 Jul 2018 23:04:53 +0000 (16:04 -0700)]
Linux 4.18-rc3
Linus Torvalds [Sun, 1 Jul 2018 19:38:16 +0000 (12:38 -0700)]
Merge tag 'for-4.18-rc2-tag' of git://git./linux/kernel/git/kdave/linux
Pull btrfs fixes from David Sterba:
"We have a few regression fixes for qgroup rescan status tracking and
the vm_fault_t conversion that mixed up the error values"
* tag 'for-4.18-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
Btrfs: fix mount failure when qgroup rescan is in progress
Btrfs: fix regression in btrfs_page_mkwrite() from vm_fault_t conversion
btrfs: quota: Set rescan progress to (u64)-1 if we hit last leaf
Linus Torvalds [Sun, 1 Jul 2018 19:32:19 +0000 (12:32 -0700)]
Merge branch 'fixes' of git://git./linux/kernel/git/viro/vfs
Pull vfs fix from Al Viro:
"Followup to procfs-seq_file series this window"
This fixes a memory leak by making sure that proc seq files release any
private data on close. The 'proc_seq_open' has to be properly paired
with 'proc_seq_release' that releases the extra private data.
* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
proc: add proc_seq_release
Linus Torvalds [Sun, 1 Jul 2018 19:20:20 +0000 (12:20 -0700)]
Merge tag 'staging-4.18-rc3' of git://git./linux/kernel/git/gregkh/staging
Pull staging/IIO fixes from Greg KH:
"Here are a few small staging and IIO driver fixes for 4.18-rc3.
Nothing major or big, all just fixes for reported problems since
4.18-rc1. All of these have been in linux-next this week with no
reported problems"
* tag 'staging-4.18-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging:
staging: android: ion: Return an ERR_PTR in ion_map_kernel
staging: comedi: quatech_daqp_cs: fix no-op loop daqp_ao_insn_write()
iio: imu: inv_mpu6050: Fix probe() failure on older ACPI based machines
iio: buffer: fix the function signature to match implementation
iio: mma8452: Fix ignoring MMA8452_INT_DRDY
iio: tsl2x7x/tsl2772: avoid potential division by zero
iio: pressure: bmp280: fix relative humidity unit
Linus Torvalds [Sun, 1 Jul 2018 19:05:53 +0000 (12:05 -0700)]
Merge tag 'tty-4.18-rc3' of git://git./linux/kernel/git/gregkh/tty
Pull tty/serial fixes from Greg KH:
"Here are five fixes for the tty core and some serial drivers.
The tty core ones fix some security and other issues reported by the
syzbot that I have taken too long in responding to (sorry Tetsuo!).
The 8250 serial driver fix resolves an issue where devices that used to
work properly stopped working, as they should not have been added to a
blacklist.
All of these have been in linux-next for a few days with no reported
issues"
* tag 'tty-4.18-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
vt: prevent leaking uninitialized data to userspace via /dev/vcs*
serdev: fix memleak on module unload
serial: 8250_pci: Remove stalled entries in blacklist
n_tty: Access echo_* variables carefully.
n_tty: Fix stall at n_tty_receive_char_special().
Linus Torvalds [Sun, 1 Jul 2018 18:50:16 +0000 (11:50 -0700)]
Merge tag 'usb-4.18-rc3' of git://git./linux/kernel/git/gregkh/usb
Pull USB fixes from Greg KH:
"Here is a number of USB gadget and other driver fixes for 4.18-rc3.
There's a bunch of them here, most of them being gadget driver and
xhci host controller fixes for reported issues (as normal), but there
are also some new device ids, and some fixes for the typec code.
There is an acpi core patch in here that was acked by the acpi
maintainer as it is needed for the typec fixes in order to properly
solve a problem in that driver.
All of these have been in linux-next this week with no reported
issues"
* tag 'usb-4.18-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (33 commits)
usb: chipidea: host: fix disconnection detect issue
usb: typec: tcpm: fix logbuffer index is wrong if _tcpm_log is re-entered
typec: tcpm: Fix a msecs vs jiffies bug
NFC: pn533: Fix wrong GFP flag usage
usb: cdc_acm: Add quirk for Uniden UBC125 scanner
staging/typec: fix tcpci_rt1711h build errors
usb: typec: ucsi: Fix for incorrect status data issue
usb: typec: ucsi: acpi: Workaround for cache mode issue
acpi: Add helper for deactivating memory region
usb: xhci: increase CRS timeout value
usb: xhci: tegra: fix runtime PM error handling
usb: xhci: remove the code build warning
xhci: Fix kernel oops in trace_xhci_free_virt_device
xhci: Fix perceived dead host due to runtime suspend race with event handler
dwc2: gadget: Fix ISOC IN DDMA PID bitfield value calculation
usb: gadget: dwc2: fix memory leak in gadget_init()
usb: gadget: composite: fix delayed_status race condition when set_interface
usb: dwc2: fix isoc split in transfer with no data
usb: dwc2: alloc dma aligned buffer for isoc split in
usb: dwc2: fix the incorrect bitmaps for the ports of multi_tt hub
...
Linus Torvalds [Sun, 1 Jul 2018 17:45:13 +0000 (10:45 -0700)]
Merge tag 'dma-mapping-4.18-2' of git://git.infradead.org/users/hch/dma-mapping
Pull dma mapping fixlet from Christoph Hellwig:
"Add a missing export required by riscv and unicore"
* tag 'dma-mapping-4.18-2' of git://git.infradead.org/users/hch/dma-mapping:
swiotlb: export swiotlb_dma_ops
Ilpo Järvinen [Fri, 29 Jun 2018 10:07:53 +0000 (13:07 +0300)]
tcp: prevent bogus FRTO undos with non-SACK flows
If SACK is not enabled and the first cumulative ACK after the RTO
retransmission covers more than the retransmitted skb, a spurious
FRTO undo will trigger (assuming FRTO is enabled for that RTO).
The reason is that any non-retransmitted segment acknowledged will
set FLAG_ORIG_SACK_ACKED in tcp_clean_rtx_queue even if there is
no indication that it would have been delivered for real (the
scoreboard is not kept with TCPCB_SACKED_ACKED bits in the non-SACK
case so the check for that bit won't help like it does with SACK).
Having FLAG_ORIG_SACK_ACKED set results in the spurious FRTO undo
in tcp_process_loss.
We need to use a stricter condition for the non-SACK case and check
that none of the cumulatively ACKed segments were retransmitted, to
prove that progress is due to original transmissions. Only then is
FLAG_ORIG_SACK_ACKED kept set, allowing an FRTO undo to proceed in the
non-SACK case.
(FLAG_ORIG_SACK_ACKED is planned to be renamed to FLAG_ORIG_PROGRESS to
better indicate its purpose, but to keep this change minimal it will be
done in another patch.)
Besides burstiness and congestion control violations, this problem can
result in an RTO loop: when the loss recovery is prematurely undone,
only new data will be transmitted (if available), and the next
retransmission can occur only after a new RTO, which in the case of
multiple losses (that are not for consecutive packets) requires one RTO
per loss to recover.
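A hedged sketch of the stricter condition (FLAG_RETRANS_DATA_ACKED is
an existing tcp_input.c flag; its use here is illustrative and not
necessarily the literal patch):

        /* Non-SACK: without TCPCB_SACKED_ACKED bits we cannot tell which
         * segments the cumulative ACK really covered, so if any
         * retransmitted data was ACKed, don't treat it as proof of
         * original-transmission progress. */
        if (!tcp_is_sack(tp) && (flag & FLAG_RETRANS_DATA_ACKED))
                flag &= ~FLAG_ORIG_SACK_ACKED;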
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Tested-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Stafford Horne [Sun, 1 Jul 2018 05:17:36 +0000 (14:17 +0900)]
openrisc: entry: Fix delay slot exception detection
Originally, in patch e6d20c55a4 ("openrisc: entry: Fix delay slot
detection"), I fixed delay slot detection, but only for QEMU. We missed
that hardware delay slot detection using the delay slot exception flag
(DSX) was still broken. This was because QEMU set the DSX flag in both
the pre-exception supervision register (ESR) and the supervision
register (SR), but on real hardware the DSX flag is only set in the SR
register during exceptions.
Fix this by carrying the DSX flag into the SR register during
exceptions. We also update the DSX flag read locations to read the
value from the SR register, not the pt_regs SR field, which represents
the ESR; the ESR should never have the DSX flag set.
In the process I updated or removed a few comments to match the current
state, including removing a comment saying that the DSX detection logic
was inefficient and needed to be rewritten.
I have tested this on QEMU with a patch ensuring it matches the hardware
specification.
Link: https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg00000.html
Fixes: e6d20c55a4 ("openrisc: entry: Fix delay slot detection")
Signed-off-by: Stafford Horne <shorne@gmail.com>
David S. Miller [Sun, 1 Jul 2018 00:27:44 +0000 (09:27 +0900)]
Merge git://git./pub/scm/linux/kernel/git/bpf/bpf
Daniel Borkmann says:
====================
pull-request: bpf 2018-07-01
The following pull-request contains BPF updates for your *net* tree.
The main changes are:
1) A bpf_fib_lookup() helper fix to change the API before the freeze so
that it returns an encoding of the FIB lookup result and reports the
nexthop device index in the params struct (instead of the device index
as the return code that we had before), from David.
2) Various BPF JIT fixes to address syzkaller fallout, that is, do not
reject progs when set_memory_*() fails since it could still be RO.
Also arm32 JIT was not using bpf_jit_binary_lock_ro() API which was
an issue, and a memory leak in s390 JIT found during review, from
Daniel.
3) Multiple fixes for sockmap/hash to address most of the syzkaller
triggered bugs. Usage with IPv6 was crashing, a GPF in bpf_tcp_close(),
a missing sock_map_release() routine to hook up to callbacks, and a
fix for an omitted bucket lock in sock_close(), from John.
4) Two bpftool fixes to remove duplicated error message on program load,
and another one to close the libbpf object after program load. One
additional fix for nfp driver's BPF offload to avoid stopping offload
completely if replace of program failed, from Jakub.
5) Couple of BPF selftest fixes that bail out in some of the test
scripts if the user does not have the right privileges, from Jeffrin.
6) Fixes in test_bpf for s390 when CONFIG_BPF_JIT_ALWAYS_ON is set
where we need to set the flag that some of the test cases are expected
to fail, from Kleber.
7) Fix to detangle BPF_LIRC_MODE2 dependency from CONFIG_CGROUP_BPF
since it has no relation to it and lirc2 users often have configs
without cgroups enabled and thus would not be able to use it, from Sean.
8) Fix a selftest failure in sockmap by removing a useless setrlimit()
call that would set a too low limit where at the same time we are
already including bpf_rlimit.h that does the job, from Yonghong.
9) Fix BPF selftest config with missing NET_SCHED, from Anders.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>