tcp: tcp_probe: use spin_lock_bh()
author		Eric Dumazet <edumazet@google.com>
		Wed, 15 Feb 2017 01:11:14 +0000 (17:11 -0800)
committer	David S. Miller <davem@davemloft.net>
		Wed, 15 Feb 2017 03:19:39 +0000 (22:19 -0500)
tcp_rcv_established() can now run in process context, with BH enabled.

We need to disable BH while acquiring the tcp_probe spinlock; otherwise a
softirq taking the same lock on the same CPU would spin forever and deadlock.

Fixes: 5413d1babe8f ("net: do not block BH while processing socket backlog")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Ricardo Nabinger Sanchez <rnsanchez@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/ipv4/tcp_probe.c

index f6c50af24a64737672f7ede2ff41158bfed5f1b4..3d063eb3784828b142874c92fd2db026bea0f3b3 100644
--- a/net/ipv4/tcp_probe.c
+++ b/net/ipv4/tcp_probe.c
@@ -117,7 +117,7 @@ static void jtcp_rcv_established(struct sock *sk, struct sk_buff *skb,
             (fwmark > 0 && skb->mark == fwmark)) &&
            (full || tp->snd_cwnd != tcp_probe.lastcwnd)) {
 
-               spin_lock(&tcp_probe.lock);
+               spin_lock_bh(&tcp_probe.lock);
                /* If log fills, just silently drop */
                if (tcp_probe_avail() > 1) {
                        struct tcp_log *p = tcp_probe.log + tcp_probe.head;
@@ -157,7 +157,7 @@ static void jtcp_rcv_established(struct sock *sk, struct sk_buff *skb,
                        tcp_probe.head = (tcp_probe.head + 1) & (bufsize - 1);
                }
                tcp_probe.lastcwnd = tp->snd_cwnd;
-               spin_unlock(&tcp_probe.lock);
+               spin_unlock_bh(&tcp_probe.lock);
 
                wake_up(&tcp_probe.wait);
        }
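
For reference, a minimal sketch of the locking pattern this patch enforces; the
names (my_probe_lock, my_probe_head, my_probe_add, my_probe_softirq) are
illustrative only and not part of the kernel tree. With plain spin_lock() taken
from process context, a softirq firing on the same CPU and taking the same lock
would spin on a lock its own CPU already holds; spin_lock_bh() masks bottom
halves around the critical section, which is what jtcp_rcv_established() needs
now that it can be called with BH enabled.

/* Illustrative only: hypothetical names, not from net/ipv4/tcp_probe.c. */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_probe_lock);
static unsigned int my_probe_head;

/* May be called in process context with BH enabled. */
static void my_probe_add(void)
{
	/*
	 * spin_lock() would be unsafe here: a softirq arriving on this
	 * CPU could try to take my_probe_lock and spin forever.
	 * spin_lock_bh() disables bottom halves locally first.
	 */
	spin_lock_bh(&my_probe_lock);
	my_probe_head++;
	spin_unlock_bh(&my_probe_lock);
}

/* Called in softirq (BH) context. */
static void my_probe_softirq(void)
{
	spin_lock(&my_probe_lock);	/* BH already disabled here */
	my_probe_head++;
	spin_unlock(&my_probe_lock);
}

With CONFIG_PROVE_LOCKING (lockdep) enabled, the unsafe variant is typically
flagged as an inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} lock usage.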