Stefan Hajnoczi [Sat, 21 Jul 2012 06:55:37 +0000 (06:55 +0000)]
vhost: make vhost work queue visible
The vhost work queue allows processing to be done in vhost worker thread
context, which uses the owner process mm. Access to the vring and guest
memory is typically only possible from vhost worker context so it is
useful to allow work to be queued directly by users.
Currently vhost_net only uses the poll wrappers which do not expose the
work queue functions. However, for tcm_vhost (vhost_scsi) it will be
necessary to queue custom work.
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Cc: Zhi Yong Wu <wuzhy@cn.ibm.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Stefan Hajnoczi [Sat, 21 Jul 2012 06:55:36 +0000 (06:55 +0000)]
vhost: Separate vhost-net features from vhost features
In order for other vhost devices to use the VHOST_FEATURES bits the
vhost-net specific bits need to be moved to their own VHOST_NET_FEATURES
constant.
(Asias: Update drivers/vhost/test.c to use VHOST_NET_FEATURES)
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Cc: Zhi Yong Wu <wuzhy@cn.ibm.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Asias He <asias@redhat.com>
Signed-off-by: Nicholas A. Bellinger <nab@risingtidesystems.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Denis Efremov [Fri, 20 Jul 2012 21:54:34 +0000 (01:54 +0400)]
forcedeth: spin_unlock_irq in interrupt handler fix
Replace the spin_lock_irq/spin_unlock_irq pair in the interrupt handler
with a spin_lock_irqsave/spin_unlock_irqrestore pair.
Found by Linux Driver Verification project (linuxtesting.org).
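A minimal sketch of the pattern (the lock name is illustrative, not
necessarily the exact forcedeth code):

    /* before: unconditionally re-enables IRQs on unlock */
    spin_lock_irq(&np->lock);
    /* ... handle the event ... */
    spin_unlock_irq(&np->lock);

    /* after: save and restore the interrupted context's IRQ state */
    unsigned long flags;

    spin_lock_irqsave(&np->lock, flags);
    /* ... handle the event ... */
    spin_unlock_irqrestore(&np->lock, flags);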
Signed-off-by: Denis Efremov <yefremov.denis@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 20 Jul 2012 23:16:34 +0000 (16:16 -0700)]
Merge branch 'master' of git://git./linux/kernel/git/jesse/openvswitch
Jesse Gross says:
====================
A few bug fixes and small enhancements for net-next/3.6.
...
Ansis Atteka (1):
openvswitch: Do not send notification if ovs_vport_set_options() failed
Ben Pfaff (1):
openvswitch: Check gso_type for correct sk_buff in queue_gso_packets().
Jesse Gross (2):
openvswitch: Enable retrieval of TCP flags from IPv6 traffic.
openvswitch: Reset upper layer protocol info on internal devices.
Leo Alterman (1):
openvswitch: Fix typo in documentation.
Pravin B Shelar (1):
openvswitch: Check correct return value from skb_gso_segment()
Raju Subramanian (1):
openvswitch: Replace Nicira Networks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 20 Jul 2012 23:00:53 +0000 (16:00 -0700)]
ipv4: Fix neigh lookup keying over loopback/point-to-point devices.
We were using a special key "0" for all loopback and point-to-point
device neigh lookups under ipv4, but we wouldn't use that special
key for the neigh creation.
So basically we'd make a new neigh at each and every lookup :-)
This special case to use only one neigh for these device types
is of dubious value, so just remove it entirely.
Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Leo Alterman [Fri, 20 Jul 2012 21:51:07 +0000 (14:51 -0700)]
openvswitch: Fix typo in documentation.
Signed-off-by: Leo Alterman <lalterman@nicira.com>
Signed-off-by: Jesse Gross <jesse@nicira.com>
Ben Pfaff [Fri, 20 Jul 2012 21:47:54 +0000 (14:47 -0700)]
openvswitch: Check gso_type for correct sk_buff in queue_gso_packets().
At the point where it was used, skb_shinfo(skb)->gso_type referred to a
post-GSO sk_buff. Thus, it would always be 0. We want to know the pre-GSO
gso_type, so we need to obtain it before segmenting.
Before this change, the kernel would pass inconsistent data to userspace:
packets for UDP fragments with nonzero offset would be passed along with
flow keys that indicate a zero offset (that is, the flow key for "later"
fragments claimed to be "first" fragments). This inconsistency tended
to confuse Open vSwitch userspace, causing it to log messages about
"failed to flow_del" the flows with "later" fragments.
Signed-off-by: Ben Pfaff <blp@nicira.com>
Signed-off-by: Jesse Gross <jesse@nicira.com>
Pravin B Shelar [Fri, 20 Jul 2012 21:46:29 +0000 (14:46 -0700)]
openvswitch: Check correct return value from skb_gso_segment()
Fix return check typo.
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: Jesse Gross <jesse@nicira.com>
Cloud Ren [Thu, 19 Jul 2012 17:01:58 +0000 (17:01 +0000)]
atl1c: fix issue of io access mode for AR8152 v2.1
When io access mode is enabled by BOOTROM or BIOS for AR8152 v2.1,
the registers can't be read or written via memory access mode.
Clearing bit 8 of register 0x21c fixes the issue.
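A rough sketch of the kind of change described (the accessor and field
names below are assumptions for illustration, not the actual atl1c code):

    u32 val;

    val = readl(hw->hw_addr + 0x21c);  /* hw->hw_addr is an assumed mapping */
    val &= ~BIT(8);                    /* clear bit 8 to leave io access mode */
    writel(val, hw->hw_addr + 0x21c);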
Signed-off-by: Cloud Ren <cjren@qca.qualcomm.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: xiong <xiong@qca.qualcomm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mikulas Patocka [Thu, 19 Jul 2012 06:13:36 +0000 (06:13 +0000)]
tun: fix a crash bug and a memory leak
This patch fixes a crash
tun_chr_close -> netdev_run_todo -> tun_free_netdev -> sk_release_kernel ->
sock_release -> iput(SOCK_INODE(sock))
introduced by commit
1ab5ecb90cb6a3df1476e052f76a6e8f6511cb3d
The problem is that this socket is embedded in struct tun_struct and has
no inode; iput() is called on an invalid inode, which modifies invalid
memory and may cause a crash.
sock_release() also decrements sockets_in_use; this causes the
"sockets: used" field in /proc/*/net/sockstat to keep decreasing when
tun devices are created and closed.
This patch introduces a flag, SOCK_EXTERNALLY_ALLOCATED, that instructs
sock_release() to not free the inode and not decrement sockets_in_use,
fixing both the memory corruption and the sockets_in_use underflow.
It should be backported to the 3.3 and 3.4 stable trees.
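Conceptually the change looks like the sketch below (a sketch of the idea
only, not the exact patch):

    /* tun: mark the embedded socket so sock_release() leaves it alone */
    set_bit(SOCK_EXTERNALLY_ALLOCATED, &tun->socket.flags);

    /* net/socket.c, sock_release(): skip inode and accounting teardown */
    if (test_bit(SOCK_EXTERNALLY_ALLOCATED, &sock->flags))
        return;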
Signed-off-by: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
Cc: stable@kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Anastasov [Fri, 20 Jul 2012 09:02:08 +0000 (12:02 +0300)]
ipv4: show pmtu in route list
Override the metrics with rt_pmtu
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 20 Jul 2012 18:11:59 +0000 (11:11 -0700)]
Merge branch 'master' of git://git./linux/kernel/git/jkirsher/net-next
Jeff Kirsher says:
====================
This series contains updates to ixgbe.
...
Alexander Duyck (9):
ixgbe: Use VMDq offset to indicate the default pool
ixgbe: Fix memory leak when SR-IOV VFs are direct assigned
ixgbe: Drop references to deprecated pci_ DMA api and instead use
dma_ API
ixgbe: Cleanup configuration of FCoE registers
ixgbe: Merge all FCoE percpu values into a single structure
ixgbe: Make FCoE allocation and configuration closer to how rings
work
ixgbe: Correctly set SAN MAC RAR pool to default pool of PF
ixgbe: Only enable anti-spoof on VF pools
ixgbe: Enable FCoE FSO and CRC offloads based on CAPABLE instead of
ENABLED flag
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 20 Jul 2012 18:07:37 +0000 (11:07 -0700)]
Merge branch 'team_multiq'
Jiri Pirko says:
====================
This patchset represents the path I walked while adding multiqueue
support to the team driver.
Jiri Pirko (6):
net: honour netif_set_real_num_tx_queues() retval
rtnl: allow to specify different num for rx and tx queue count
rtnl: allow to specify number of rx and tx queues on device creation
net: rename bond_queue_mapping to slave_dev_queue_mapping
bond_sysfs: use real_num_tx_queues rather than params.tx_queue
team: add multiqueue support
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Fri, 20 Jul 2012 02:28:51 +0000 (02:28 +0000)]
team: add multiqueue support
Largely copied from bonding code.
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Fri, 20 Jul 2012 02:28:50 +0000 (02:28 +0000)]
bond_sysfs: use real_num_tx_queues rather than params.tx_queue
Since the number of tx queues can now be specified during bond instance
creation and may therefore differ from params.tx_queues, use
real_num_tx_queues for the boundary check instead.
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Fri, 20 Jul 2012 02:28:49 +0000 (02:28 +0000)]
net: rename bond_queue_mapping to slave_dev_queue_mapping
As this is going to be used not only by bonding.
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Fri, 20 Jul 2012 02:28:48 +0000 (02:28 +0000)]
rtnl: allow to specify number of rx and tx queues on device creation
This patch introduces IFLA_NUM_TX_QUEUES and IFLA_NUM_RX_QUEUES, by
which userspace can set the number of rx and/or tx queues to be
allocated for a newly created netdevice.
This overrides ops->get_num_[tr]x_queues().
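Conceptually, link creation could then pick the counts roughly like this
(a sketch; the variable names and the simplified callback signature are
illustrative):

    unsigned int num_tx_queues = 1;

    if (tb[IFLA_NUM_TX_QUEUES])
        num_tx_queues = nla_get_u32(tb[IFLA_NUM_TX_QUEUES]);
    else if (ops->get_num_tx_queues)
        num_tx_queues = ops->get_num_tx_queues();  /* signature simplified */
    /* same pattern for IFLA_NUM_RX_QUEUES / get_num_rx_queues() */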
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Fri, 20 Jul 2012 02:28:47 +0000 (02:28 +0000)]
rtnl: allow to specify different num for rx and tx queue count
Also cut out unused function parameters and a possible err in the
return value.
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Fri, 20 Jul 2012 02:28:46 +0000 (02:28 +0000)]
net: honour netif_set_real_num_tx_queues() retval
In netif_copy_real_num_queues() the return value of
netif_set_real_num_tx_queues() should be checked.
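A minimal sketch of the kind of check meant here (not necessarily the
exact patch):

    static inline int netif_copy_real_num_queues(struct net_device *to_dev,
                                                 const struct net_device *from_dev)
    {
        int err;

        err = netif_set_real_num_tx_queues(to_dev, from_dev->real_num_tx_queues);
        if (err)
            return err;
    #ifdef CONFIG_RPS
        return netif_set_real_num_rx_queues(to_dev, from_dev->real_num_rx_queues);
    #else
        return 0;
    #endif
    }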
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Fri, 20 Jul 2012 05:45:50 +0000 (05:45 +0000)]
tcp: improve latencies of timer triggered events
The modern TCP stack depends heavily on tcp_write_timer() having small
latency, but the current implementation doesn't exactly meet
expectations.
When a timer fires but finds the socket owned by the user, it rearms
itself for an additional delay, hoping the next run will be more
successful.
tcp_write_timer(), for example, uses a 50ms delay for the next try, and
this defeats many attempts to get predictable TCP behavior in terms of
latencies.
Use the recently introduced tcp_release_cb(), so that the user owning
the socket will call various handlers right before socket release.
This will permit us to post a follow-up patch to address the
tcp_tso_should_defer() syndrome (some deferred packets have to wait for
the RTO timer to be transmitted, while cwnd would allow us to send them
sooner).
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Nandita Dukkipati <nanditad@google.com>
Cc: H.K. Jerry Chu <hkchu@google.com>
Cc: John Heffner <johnwheffner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Fri, 20 Jul 2012 05:02:33 +0000 (05:02 +0000)]
tcp: fix ABC in tcp_slow_start()
When sysctl_tcp_abc > 1, we expect tcp_slow_start() to increase cwnd by
2 if the received ACK acknowledges more than 2*MSS bytes.
The problem is that this RFC 3465 behavior is not coded correctly: the
while () loop only ever increases snd_cwnd one at a time.
Add a new variable to avoid this off-by-one error.
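A sketch along the lines of the fix: accumulate the increase and apply
it once after the loop, instead of bumping snd_cwnd inside the loop,
which changes the loop bound while iterating (simplified, not the exact
kernel code):

    u32 delta = 0;

    tp->snd_cwnd_cnt += cnt;
    while (tp->snd_cwnd_cnt >= tp->snd_cwnd) {
        tp->snd_cwnd_cnt -= tp->snd_cwnd;
        delta++;                 /* count the increase instead of applying it */
    }
    tp->snd_cwnd = min(tp->snd_cwnd + delta, tp->snd_cwnd_clamp);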
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Nandita Dukkipati <nanditad@google.com>
Cc: John Heffner <johnwheffner@gmail.com>
Cc: Stephen Hemminger <shemminger@vyatta.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 19 Jul 2012 23:02:34 +0000 (23:02 +0000)]
tcp: use hash_32() in tcp_metrics
Fix a missing roundup_pow_of_two(), since tcpmhash_entries is not
guaranteed to be a power of two.
Use hash_32() instead of a custom hash.
tcpmhash_entries should be an unsigned int.
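A small sketch of the hashing scheme described (variable names and the
default size are illustrative):

    unsigned int slots = roundup_pow_of_two(tcpmhash_entries ? : 1024);
    unsigned int log   = ilog2(slots);

    /* index into a table of 'slots' buckets with the generic hash_32() */
    hash = hash_32(hash, log);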
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vijay Subramanian [Thu, 19 Jul 2012 21:32:18 +0000 (21:32 +0000)]
tcp: Return bool instead of int where appropriate
Applied to a set of static inline functions in tcp_input.c
Signed-off-by: Vijay Subramanian <subramanian.vijay@gmail.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Mason [Thu, 19 Jul 2012 21:02:09 +0000 (21:02 +0000)]
ixgbe: use PCI_VENDOR_ID_INTEL
Use PCI_VENDOR_ID_INTEL from pci_ids.h instead of creating its own
vendor ID #define.
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Bruce Allan <bruce.w.allan@intel.com>
Cc: Carolyn Wyborny <carolyn.wyborny@intel.com>
Cc: Don Skidmore <donald.c.skidmore@intel.com>
Cc: Greg Rose <gregory.v.rose@intel.com>
Cc: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Cc: Alex Duyck <alexander.h.duyck@intel.com>
Cc: John Ronciak <john.ronciak@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Mason [Thu, 19 Jul 2012 21:02:08 +0000 (21:02 +0000)]
ixgb: use PCI_VENDOR_ID_INTEL
Use PCI_VENDOR_ID_INTEL from pci_ids.h instead of creating its own
vendor ID #define.
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Bruce Allan <bruce.w.allan@intel.com>
Cc: Carolyn Wyborny <carolyn.wyborny@intel.com>
Cc: Don Skidmore <donald.c.skidmore@intel.com>
Cc: Greg Rose <gregory.v.rose@intel.com>
Cc: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Cc: Alex Duyck <alexander.h.duyck@intel.com>
Cc: John Ronciak <john.ronciak@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Mason [Thu, 19 Jul 2012 20:11:13 +0000 (20:11 +0000)]
myri10ge: update MAINTAINERS
Remove myself from myri10ge MAINTAINERS list
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 20 Jul 2012 17:56:03 +0000 (10:56 -0700)]
Merge branch 'for-davem' of git://gitorious.org/linux-can/linux-can-next
Marc Kleine-Budde says:
====================
the fifth pull request for upcoming v3.6 net-next cleans up and
improves the janz-ican3 driver (6 patches by Ira W. Snyder, one by me).
A patch by Steffen Trumtrar adds imx53 support to the flexcan driver.
And another patch by me marks the bit timing constants in the CAN
drivers as "const".
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
John W. Linville [Fri, 20 Jul 2012 16:30:48 +0000 (12:30 -0400)]
Merge branch 'master' of git://git./linux/kernel/git/linville/wireless-next into for-davem
Ira W. Snyder [Wed, 18 Jul 2012 22:33:18 +0000 (15:33 -0700)]
can: janz-ican3: add support for one shot mode
The Janz VMOD-ICAN3 hardware has support for one-shot packet
transmission. This means that transmission of a packet will be attempted
only once, with no automatic retries.
The SocketCAN core has a controller-wide setting for this mode:
CAN_CTRLMODE_ONE_SHOT. The Janz VMOD-ICAN3 hardware supports this flag
on a per-packet level, but the SocketCAN core does not.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Ira W. Snyder [Wed, 18 Jul 2012 22:33:17 +0000 (15:33 -0700)]
can: janz-ican3: avoid firmware lockup caused by infinite bus error quota
If the bus error quota is set to infinite and the host CPU cannot keep
up, the Janz VMOD-ICAN3 firmware will stop responding to control
messages until the controller is reset.
The firmware will automatically stop sending bus error messages when the
quota is reached, and will only resume sending bus error messages when
the quota is re-set to a positive value.
This limitation is worked around by setting the bus error quota to one
message, and then re-setting the quota to one message every time a bus
error message is received. By doing this, the firmware never stops
responding to control messages. The CAN bus can be reset without a
hard-reset of the controller card.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Ira W. Snyder [Thu, 19 Jul 2012 15:54:42 +0000 (08:54 -0700)]
can: janz-ican3: fix support for CAN_RAW_RECV_OWN_MSGS
The Janz VMOD-ICAN3 firmware does not support any sort of TX-done
notification or interrupt. The driver previously used the hardware
loopback to attempt to work around this deficiency, but this caused all
sockets to receive all messages, even if CAN_RAW_RECV_OWN_MSGS is off.
Using the new function ican3_cmp_echo_skb(), we can drop the loopback
messages and return the original skbs. This fixes the issues with
CAN_RAW_RECV_OWN_MSGS.
A private skb queue is used to store the echo skbs. This avoids the need
for any index management.
Due to a lack of TX-error interrupts, bus errors are permanently
enabled, and are used as a TX-error notification. This is used to drop
an echo skb when transmission fails. Bus error packets are not generated
if the user has not enabled bus error reporting.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Ira W. Snyder [Thu, 19 Jul 2012 15:54:18 +0000 (08:54 -0700)]
can: janz-ican3: fix error and byte counters
The error and byte counter statistics were being incremented
incorrectly. For example, a TX error would be counted both in tx_errors
and rx_errors.
This corrects the problem so that tx_errors and rx_errors are only
incremented for errors caused by packets sent to the bus. Error packets
generated by the driver are not counted.
The byte counters are only increased for packets which are actually
transmitted or received from the bus. Error packets generated by the
driver are not counted.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Marc Kleine-Budde [Thu, 21 Oct 2010 16:39:26 +0000 (18:39 +0200)]
can: janz-ican3: cleanup of ican3_to_can_frame and can_frame_to_ican3
This patch cleans up the ICAN3 to Linux CAN frame and vice versa
conversion functions:
- RX: Use get_can_dlc() to limit the dlc value.
- RX+TX: Don't copy the whole frame, only copy the amount of bytes
specified in cf->can_dlc.
Acked-by: Ira W. Snyder <iws@ovro.caltech.edu>
Tested-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Ira W. Snyder [Wed, 18 Jul 2012 22:33:14 +0000 (15:33 -0700)]
can: janz-ican3: drop invalid skbs
The commit which added the janz-ican3 driver and commit
3ccd4c61 "can: Unify droping of invalid tx skbs and netdev stats" were
committed into mainline Linux during the same merge window.
Therefore, the addition of this code to the janz-ican3 driver was
forgotten. This patch adds the expected code.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Ira W. Snyder [Wed, 18 Jul 2012 22:33:13 +0000 (15:33 -0700)]
can: janz-ican3: remove dead code
The code which used this variable was removed during review, before the
driver was added to mainline Linux. It is now dead code, and can be
removed.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Steffen Trumtrar [Tue, 17 Jul 2012 14:14:34 +0000 (16:14 +0200)]
can: flexcan: add 2nd clock to support imx53 and newer
This patch adds support for a second clock to the flexcan driver. On
modern freescale ARM cores like the imx53 and imx6q two clocks ("ipg"
and "per") must be enabled in order to access the CAN core.
In the original driver, the clock was requested without specifying the
connection id, further all mainline ARM archs with flexcan support
(imx28, imx25, imx35) register their flexcan clock without a
connection id, too.
This patch first renames the existing clk variable to clk_ipg and
converts it to devm for easier error handling. The connection id "ipg"
is added to the devm_clk_get() call. Then a second clock "per" is
requested. As these archs don't specify a connection id, both clk_get
calls return the same clock. This ensures compatibility with existing flexcan
support and adds support for imx53 at the same time.
After this patch hits mainline, the archs may give their existing
flexcan clock the "ipg" connection id and implement a dummy "per"
clock.
This patch has been tested on imx28 (unmodified clk tree) and on imx53
with separate "ipg" and "per" clocks.
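The clock handling described above could look roughly like the following
(a sketch; the struct field names are assumptions):

    priv->clk_ipg = devm_clk_get(&pdev->dev, "ipg");
    if (IS_ERR(priv->clk_ipg))
        return PTR_ERR(priv->clk_ipg);

    priv->clk_per = devm_clk_get(&pdev->dev, "per");
    if (IS_ERR(priv->clk_per))
        return PTR_ERR(priv->clk_per);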
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Acked-by: Hui Wang <jason77.wang@gmail.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Marc Kleine-Budde [Mon, 16 Jul 2012 10:58:31 +0000 (12:58 +0200)]
can: mark bittiming_const pointer in struct can_priv as const
This patch marks the bittiming_const pointer in struct can_priv as
"const". This allows us to mark the struct can_bittiming_const in the CAN
drivers as "const", too.
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Alexander Duyck [Fri, 25 May 2012 06:38:18 +0000 (06:38 +0000)]
ixgbe: Enable FCoE FSO and CRC offloads based on CAPABLE instead of ENABLED flag
Instead of only setting the FCOE segmentation offload and CRC offload flags
if we enable FCoE, we could just set them always since there are no
modifications needed to the hardware or adapter FCoE structure in order to
use these features.
The advantage to this is that if FCoE enablement fails, for example because
SR-IOV was enabled on 82599, we will still have use of the FCoE
segmentation offload and Tx/Rx CRC offloads which should still help to
improve the FCoE performance.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Sat, 5 May 2012 05:32:58 +0000 (05:32 +0000)]
ixgbe: Only enable anti-spoof on VF pools
The current logic is enabling anti-spoof on all pools and then clearing
anti-spoof on just the first PF pool. The correct approach is to only set
anti-spoof on the VF pools and to leave all of the PF pools unchecked.
This allows for items such as FCoE to use adjacent pools within the PF for
transmit and receive queues without the traffic being blocked by this
security feature.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Sat, 5 May 2012 05:32:52 +0000 (05:32 +0000)]
ixgbe: Correctly set SAN MAC RAR pool to default pool of PF
This change corrects an issue in which an FCoE enabled adapter was always
setting the FCoE SAN MAC MPSAR register to 0x1. This results in the first
VF being assigned the SAN MAC address in the case of SR-IOV and as such is
incorrect. To resolve this I am adding a new function that will update the
SAN MAC pool address after reset.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Sat, 5 May 2012 05:32:47 +0000 (05:32 +0000)]
ixgbe: Make FCoE allocation and configuration closer to how rings work
This patch changes the behavior of the FCoE configuration so that it is
much closer to how the main body of the ixgbe driver works for ring
allocation.
The first piece is the ixgbe_fcoe_ddp_enable/disable calls. These allocate
the percpu values and if successful set the fcoe_ddp_xid value indicating
that we can support DDP.
The next piece is the ixgbe_setup/free_ddp_resources calls. These are
called on open/close and will allocate and free the DMA pools.
Finally ixgbe_configure_fcoe is now just register configuration. It can go
through and enable the registers for the FCoE redirection offload, and FIP
configuration without any interference from the DDP pool allocation.
The net result of all this is twofold. First, it adds a certain amount of
exception handling. So for example if ixgbe_setup_fcoe_resources fails we
will actually generate an error in open and refuse to bring up the
interface.
Secondly it provides a much more graceful failure case than the previous
model which would skip setting up the registers for FCoE on failure to
allocate DDP resources leaving no Rx functionality enabled instead of just
disabling DDP.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Sat, 5 May 2012 17:14:28 +0000 (17:14 +0000)]
ixgbe: Merge all FCoE percpu values into a single structure
This change merges the 2 statistics values for noddp and noddp_ext_buff
and the dma_pool into a single structure that can be allocated per CPU.
The advantages to this are several fold. First we only need to do one
alloc_percpu call now instead of 3, so that means less overhead for
handling memory allocation failures. Secondly in the case of
ixgbe_fcoe_ddp_setup we only need to call get_cpu once which makes things a
bit cleaner since we can drop a put_cpu() from the exception path.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Sat, 5 May 2012 05:32:37 +0000 (05:32 +0000)]
ixgbe: Cleanup configuration of FCoE registers
This change makes it so we always use the FCoE redirection table. We just
set all 8 entries to the same value in the case of only having one queue
for FCoE.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Sat, 5 May 2012 05:32:32 +0000 (05:32 +0000)]
ixgbe: Drop references to deprecated pci_ DMA api and instead use dma_ API
The networking side of the code had already been updated to use dma_ calls
instead of the old pci_ calls. However it looks like the FCoE code was
never updated. This change goes through and moves everything from the pci
APIs to the dma APIs.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Sat, 5 May 2012 05:32:26 +0000 (05:32 +0000)]
ixgbe: Fix memory leak when SR-IOV VFs are direct assigned
The VF driver had a memory leak that would occur if VFs were assigned to a
guest. The amount of leak would vary with the number of VFs but could max
out at about 14K per PF. To reproduce the leak all you would need to do is
enable all the VFs on the first PF. Then start a loop of loading and
unloading the driver with max_vfs=63 for the first port.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Sat, 5 May 2012 05:32:21 +0000 (05:32 +0000)]
ixgbe: Use VMDq offset to indicate the default pool
This change makes it so that we can use the VMDq ring feature offset value
to determine the default pool instead of using num_vfs. The reason for
this change is to avoid issues should we fail to allocate vfinfo but have
pre-existing VFs. What should happen in this case is that num_vfs will go
to 0, but the VMDq offset will contain the location of the first PF pool.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Sibai Li <Sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
David S. Miller [Thu, 19 Jul 2012 20:39:27 +0000 (13:39 -0700)]
Merge branch 'net' of git://git./linux/kernel/git/cmetcalf/linux-tile
Julian Anastasov [Thu, 19 Jul 2012 20:02:45 +0000 (23:02 +0300)]
ipv4: Fix again the time difference calculation
Fix again the diff value in rt_bind_exception
after a collision of the two latest patches; my original commit
actually fixed the same problem.
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 19 Jul 2012 18:17:30 +0000 (11:17 -0700)]
Merge git://git./linux/kernel/git/davem/net
Conflicts:
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
Yuchung Cheng [Thu, 19 Jul 2012 06:43:11 +0000 (06:43 +0000)]
net-tcp: Fast Open client - cookie-less mode
In trusted networks, e.g., an intranet or data-center, the client does
not need to use a Fast Open cookie to mitigate DoS attacks. In cookie-less
mode, sendmsg() with MSG_FASTOPEN flag will send SYN-data regardless
of cookie availability.
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Thu, 19 Jul 2012 06:43:10 +0000 (06:43 +0000)]
net-tcp: Fast Open client - detecting SYN-data drops
On paths with firewalls dropping SYN with data or experimental TCP options,
Fast Open connections will experience SYN timeouts and bad performance.
The solution is to track such incidents in the cookie cache and disable
Fast Open temporarily.
Since only the original SYN includes data and/or Fast Open option, the
SYN-ACK has some tell-tale sign (tcp_rcv_fastopen_synack()) to detect
such drops. If a path has recurring Fast Open SYN drops, Fast Open is
disabled for 2^(recurring_losses) minutes, starting from four minutes up to
roughly one and a half days. sendmsg() with the MSG_FASTOPEN flag will still
succeed, but it behaves like connect() followed by write().
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Thu, 19 Jul 2012 06:43:09 +0000 (06:43 +0000)]
net-tcp: Fast Open client - sendmsg(MSG_FASTOPEN)
sendmsg() (or sendto()) with MSG_FASTOPEN is a combo of connect(2)
and write(2). The application should replace connect() with it to
send data in the opening SYN packet.
For a blocking socket, sendmsg() blocks until all the data is buffered
locally and the handshake is completed, like a connect() call. It
returns errno values similar to connect() if the TCP handshake fails.
For a non-blocking socket, it returns the number of bytes queued (and
transmitted in the SYN-data packet) if a cookie is available. If a cookie
is not available, it transmits a data-less SYN packet with the Fast Open
cookie request option and returns -EINPROGRESS like connect().
Using MSG_FASTOPEN on a connecting or connected socket results in errno
values similar to repeated connect() calls. Therefore the application
should only use this flag on new sockets.
The buffer size of sendmsg() is independent of the MSS of the connection.
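A userspace usage sketch (the helper is illustrative; MSG_FASTOPEN is
defined by kernel headers that ship this feature, the fallback define
matches the kernel value):

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #ifndef MSG_FASTOPEN
    #define MSG_FASTOPEN 0x20000000
    #endif

    /* replaces connect() + write(): the data may ride in the SYN */
    static ssize_t tfo_send(const char *ip, unsigned short port,
                            const void *buf, size_t len)
    {
        struct sockaddr_in daddr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&daddr, 0, sizeof(daddr));
        daddr.sin_family = AF_INET;
        daddr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &daddr.sin_addr);

        return sendto(fd, buf, len, MSG_FASTOPEN,
                      (struct sockaddr *)&daddr, sizeof(daddr));
    }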
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Thu, 19 Jul 2012 06:43:08 +0000 (06:43 +0000)]
net-tcp: Fast Open client - receiving SYN-ACK
On receiving the SYN-ACK after SYN-data, the client needs to
a) update the cached MSS and cookie (if included in SYN-ACK)
b) retransmit the data not yet acknowledged by the SYN-ACK in the final ACK of
the handshake.
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Thu, 19 Jul 2012 06:43:07 +0000 (06:43 +0000)]
net-tcp: Fast Open client - sending SYN-data
This patch implements sending SYN-data in tcp_connect(). The data is
from tcp_sendmsg() with flag MSG_FASTOPEN (implemented in a later patch).
The length of the cookie in tcp_fastopen_req, initialized to 0, controls
the type of the SYN. If the cookie is not cached (len == 0), the host
sends a data-less SYN with the Fast Open cookie request option to solicit
a cookie from the remote. If a cookie is available (len > 0), the host
sends a SYN-data with the Fast Open cookie option. If the cookie length
is negative, the SYN will not include any Fast Open option (for fallback
operations).
To deal with middleboxes that may drop SYN with data or experimental TCP
option, the SYN-data is only sent once. SYN retransmits do not include
data or Fast Open options. The connection will fall back to a regular TCP
handshake.
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Thu, 19 Jul 2012 06:43:06 +0000 (06:43 +0000)]
net-tcp: Fast Open client - cookie cache
With help from Eric Dumazet, add Fast Open metrics in tcp metrics cache.
The basic ones are the MSS and the cookies. A later patch will cache more to
handle unfriendly middleboxes.
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Thu, 19 Jul 2012 06:43:05 +0000 (06:43 +0000)]
net-tcp: Fast Open base
This patch implements the common code for both the client and the server.
1. TCP Fast Open option processing. Since Fast Open does not have an
option number assigned by IANA yet, it shares the experimental option
code 254 by implementing draft-ietf-tcpm-experimental-options
with a 16-bit magic number 0xF989. This enables global experiments
without clashing with the scarce (2) experimental options available for TCP.
When the draft status becomes standard (maybe), the client should
switch to the newly assigned option number while the server supports
both numbers for the transition.
2. The new sysctl tcp_fastopen
3. A placeholder init function
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thadeu Lima de Souza Cascardo [Mon, 16 Jul 2012 07:01:53 +0000 (07:01 +0000)]
mlx4_en: map entire pages to increase throughput
In its receive path, mlx4_en driver maps each page chunk that it pushes
to the hardware and unmaps it when pushing it up the stack. This limits
throughput to about 3Gbps on a Power7 8-core machine.
One solution is to map the entire allocated page at once. However, this
requires that we keep track of every page fragment we give to a
descriptor. We also need to work with the discipline that all fragments will
be released (in the sense that they will not be reused by the driver
anymore) in the order they were allocated to the driver.
This requires that we don't reuse any fragments; every single one of
them must be reallocated. We do that by releasing all the fragments that
are processed and starting the refill only after we have finished
processing the descriptors.
We also must somehow guarantee that we either refill all fragments in a
descriptor or none at all, without resorting to giving up a page
fragment that we would have already given. Otherwise, we would break the
discipline of only releasing the fragments in the order they were
allocated.
This has passed page allocation fault injections (restricted to the
driver by using required-start and required-end) and device hotplug
while 16 TCP streams were able to deliver more than 9Gbps.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michal Schmidt [Thu, 19 Jul 2012 07:04:45 +0000 (07:04 +0000)]
sfc: initialize dynamic sysfs attributes for lockdep
Dynamically allocated sysfs attributes must be initialized using
sysfs_attr_init(), otherwise lockdep complains:
BUG: key <address> not in .data!
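A minimal sketch of the requirement (the attribute name and handlers are
illustrative, not the sfc code):

    struct device_attribute *attr = kzalloc(sizeof(*attr), GFP_KERNEL);

    sysfs_attr_init(&attr->attr);      /* gives lockdep a valid key */
    attr->attr.name = "example_attr";  /* illustrative name */
    attr->attr.mode = 0444;
    attr->show = example_show;         /* hypothetical handler */
    device_create_file(dev, attr);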
Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
Acked-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
stephen hemminger [Thu, 19 Jul 2012 07:01:07 +0000 (07:01 +0000)]
bridge: update documentation references
Update the references to bridge utilities and web pages
to current locations
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bjørn Mork [Thu, 19 Jul 2012 06:28:40 +0000 (06:28 +0000)]
net: e100: ucode is optional in some cases
commit
9ac32e1b firmware: convert e100 driver to request_firmware()
did a straight conversion of the in-driver ucode to external
files. This introduced the possibility of the driver failing
to enable an interface due to missing ucode. There was no
evaluation of the importance of the ucode at the time.
Based on comments in earlier versions of this driver, and in
the source code for the FreeBSD fxp driver, we can assume that
the ucode implements the "CPU Cycle Saver" feature on supported
adapters. Although generally wanted, this is an optional
feature. The ucode source is not available, preventing it from
being included in free distributions. This creates unnecessary
problems for the end users. Doing a network install based on a
free distribution installer requires the user to download and
insert the ucode into the installer.
Making the ucode optional when possible improves the user
experience and driver usability.
The ucode for some adapters includes a bugfix, making it
essential. We continue to fail for these adapters unless the
ucode is available.
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
Christian Riesch [Thu, 19 Jul 2012 02:02:19 +0000 (02:02 +0000)]
asix: AX88172A driver depends on phylib
Since commit
16626b0cc3d5afe250850f96759b241f8a403b52 the asix
driver depends on the phylib. Select phylib when the asix driver is
selected.
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Cc: kernel-janitors@vger.kernel.org
Signed-off-by: Christian Riesch <christian.riesch@omicron.at>
Tested-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Christian Riesch [Thu, 19 Jul 2012 00:23:07 +0000 (00:23 +0000)]
asix: Add support for programming the EEPROM
This patch adds the asix_set_eeprom() function to provide support for
programming the configuration EEPROM via ethtool.
Signed-off-by: Christian Riesch <christian.riesch@omicron.at>
Signed-off-by: David S. Miller <davem@davemloft.net>
Christian Riesch [Thu, 19 Jul 2012 00:23:06 +0000 (00:23 +0000)]
asix: Rework reading from EEPROM
The current code for reading the EEPROM via ethtool in the asix
driver has a few issues. It cannot handle odd length values
(accesses must be aligned to 16-bit boundaries) and interprets the
offset provided by ethtool as a 16-bit word offset instead of as a byte offset.
The new code for asix_get_eeprom() introduced by this patch is
modeled after the code in
drivers/net/ethernet/atheros/atl1e/atl1e_ethtool.c
and provides read access to the entire EEPROM with arbitrary
offsets and lengths.
Signed-off-by: Christian Riesch <christian.riesch@omicron.at>
Signed-off-by: David S. Miller <davem@davemloft.net>
Dinh Nguyen [Wed, 18 Jul 2012 13:28:26 +0000 (13:28 +0000)]
net: stmmac: Add ip version to dts bindings
Because there are multiple variants to the stmmac/dwmac driver, the
dts bindings should be updated to include the version of the IP used.
Signed-off-by: Dinh Nguyen <dinguyen@altera.com>
Acked-by: Stefan Roese <sr@denx.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
brenohl@br.ibm.com [Wed, 18 Jul 2012 09:29:08 +0000 (09:29 +0000)]
cxgb3: Set vlan_feature on net_device
The cxgb3 interface has bad performance when a VLAN is set. On my current
setup, a PowerLinux 7R2, I am able to get around 7 Gbps on a TCP_STREAM
(8 instances, 4k message).
With this patch, I am able to reach 9.5 Gbps.
Signed-off-by: Breno Leitao <brenohl@br.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
stephen hemminger [Wed, 18 Jul 2012 09:09:48 +0000 (09:09 +0000)]
ipx: move peII functions
The Ethernet II wrapper is only used by the IPX protocol; it may have once
been used by Appletalk but is not currently. Therefore it makes sense to
move it to the IPX dust bin and drop the exports.
Build tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 19 Jul 2012 17:43:03 +0000 (10:43 -0700)]
net: Fix warnings in dst_ops.h
include/net/dst_ops.h:28:20: warning: ‘struct sock’ declared inside parameter list
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 19 Jul 2012 07:34:03 +0000 (07:34 +0000)]
ipv4: tcp: remove per net tcp_sock
tcp_v4_send_reset() and tcp_v4_send_ack() use a single socket
per network namespace.
This leads to bad behavior on multiqueue NICs, because many cpus
contend for the socket lock and, once the socket lock is acquired, extra
false sharing on various socket fields slows down the operations.
To better resist attacks, we use a percpu socket. Each cpu can
run without contention, using appropriate memory (local node).
Additional features:
1) We also mirror the queue_mapping of the incoming skb, so that
answers use the same queue if possible.
2) Setting the SOCK_USE_WRITE_QUEUE socket flag speeds up sock_wfree()
3) We now limit the number of in-flight RST/ACK [1] packets
per cpu, instead of per namespace, and we honor the sysctl_wmem_default
limit dynamically. (Prior to this patch, sysctl_wmem_default value was
copied at boot time, so any further change would not affect tcp_sock
limit)
[1] These packets are only generated when no socket was matched for
the incoming packet.
Reported-by: Bill Sommerfeld <wsommerfeld@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Anastasov [Wed, 18 Jul 2012 10:15:35 +0000 (10:15 +0000)]
ipv4: use seqlock for nh_exceptions
Use global seqlock for the nh_exceptions. Call
fnhe_oldest with the right hash chain. Correct the diff
value for dst_set_expires.
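The reader side of such a seqlock follows the usual retry pattern (a
generic sketch, not the exact rt_bind_exception code; the lock name is
illustrative):

    unsigned int seq;

    do {
        seq = read_seqbegin(&fnhe_seqlock);
        /* ... search the exception hash chain for daddr ... */
    } while (read_seqretry(&fnhe_seqlock, seq));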
v2: after suggestions from Eric Dumazet:
* get rid of spin lock fnhe_lock, rearrange update_or_create_fnhe
* continue daddr search in rt_bind_exception
v3:
* remove the daddr check before seqlock in rt_bind_exception
* restart lookup in rt_bind_exception on detected seqlock change,
as suggested by David Miller
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
Duan Jiong [Thu, 19 Jul 2012 16:36:34 +0000 (12:36 -0400)]
libertas: firmware.c: remove duplicated include
Signed-off-by: Duan Jiong <djduanjiong@gmail.com>
John W. Linville [Thu, 19 Jul 2012 16:35:00 +0000 (12:35 -0400)]
Merge branch 'for-john' of git://git./linux/kernel/git/jberg/mac80211-next
David S. Miller [Thu, 19 Jul 2012 15:46:59 +0000 (08:46 -0700)]
ipv4: Fix time difference calculation in rt_bind_exception().
Reported-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amir Vadai [Wed, 18 Jul 2012 22:33:52 +0000 (22:33 +0000)]
net/mlx4_en: Add accelerated RFS support
Use RFS infrastructure and flow steering in HW to keep CPU
affinity of rx interrupts and application per TCP stream.
A flow steering filter is added to the HW whenever the RFS
ndo callback is invoked by core networking code.
Because the invocation takes place in interrupt context, the
actual setup of the HW is done using a workqueue. Whenever a new filter
is added, the driver checks for expiry of existing filters.
Since there is a window in time between the point where the core
RFS code invokes the ndo callback and the point where the HW
is configured from workqueue context, the 2nd, 3rd, etc. packets
from that stream will cause the net core to invoke the callback
again and again.
To prevent inefficient/double configuration of the HW, the filters
are kept in a database which is indexed using a hash function to enable
fast access.
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amir Vadai [Wed, 18 Jul 2012 22:33:51 +0000 (22:33 +0000)]
{NET,IB}/mlx4: Add rmap support to mlx4_assign_eq
Enable callers of mlx4_assign_eq to supply a pointer to cpu_rmap.
If supplied, the assigned IRQ is tracked using rmap infrastructure.
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amir Vadai [Wed, 18 Jul 2012 22:33:50 +0000 (22:33 +0000)]
net/rps: Protect cpu_rmap.h from double inclusion
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Acked-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amir Vadai [Wed, 18 Jul 2012 22:33:49 +0000 (22:33 +0000)]
net/mlx4: Move MAC_MASK to a common place
Define this macro in one common place instead of duplicating it all over the code.
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Anastasov [Wed, 18 Jul 2012 21:35:03 +0000 (21:35 +0000)]
ipv4: fix address selection in fib_compute_spec_dst
ip_options_compile() can be called for forwarded packets;
make sure the specific-destination address is a local one, as
specified in RFC 1812, 4.2.2.2 Addresses in Options.
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Anastasov [Wed, 18 Jul 2012 21:34:24 +0000 (21:34 +0000)]
ipv4: optimize fib_compute_spec_dst call in ip_options_echo
Move fib_compute_spec_dst to the only place where it
is needed.
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Thu, 19 Jul 2012 15:27:13 +0000 (08:27 -0700)]
Merge tag 'md-3.5-fixes' of git://neil.brown.name/md
Pull three md bugfixes from NeilBrown:
"One of the bugs was introduced in 3.5-rc1. Others have been there for
longer."
* tag 'md-3.5-fixes' of git://neil.brown.name/md:
md/raid1: close some possible races on write errors during resync
md: avoid crash when stopping md array races with closing other open fds.
md: fix bug in handling of new_data_offset
Linus Torvalds [Thu, 19 Jul 2012 15:21:13 +0000 (08:21 -0700)]
Merge git://git./linux/kernel/git/davem/net
Pull networking changes from David Miller:
"Ok, we should be good to go now"
1) We have to statically initialize the init_net device list head rather
than do so in an initcall, otherwise netprio_cgroup crashes if it's
built statically rather than modular (Mark D. Rustad)
2) Fix SKB null oopser in CIPSO ipv4 option processing (Paul Moore)
3) Qlogic maintainers update (Anirban Chakraborty)
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
net: Statically initialize init_net.dev_base_head
MAINTAINERS: Changes in qlcnic and qlge maintainers list
cipso: don't follow a NULL pointer when setsockopt() is called
Linus Torvalds [Thu, 19 Jul 2012 15:15:55 +0000 (08:15 -0700)]
Merge branch 'upstream-fixes' of git://git./linux/kernel/git/jikos/hid
Pull HID update from Jiri Kosina:
"A final round of changes for HID for 3.5: just device ID additions."
* 'upstream-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid:
HID: hid-multitouch: add support for Zytronic panels
HID: add Sennheiser BTD500USB device support
HID: add battery quirk for Apple Wireless ANSI
Ezequiel Garcia [Wed, 18 Jul 2012 13:05:26 +0000 (10:05 -0300)]
cx25821: Remove bad strcpy to read-only char*
The strcpy was being used to set the name of the board. Since the
destination char* was read-only and the name is set statically at
compile time, this was both wrong and redundant.
The type of char* is changed to const char* to prevent future errors.
Reported-by: Radek Masin <radek@masin.eu>
Signed-off-by: Ezequiel Garcia <elezegarcia@gmail.com>
[ Taking directly due to vacations - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Benjamin Tissoires [Tue, 19 Jun 2012 12:39:54 +0000 (14:39 +0200)]
HID: hid-multitouch: add support for Zytronic panels
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@enac.fr>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
NeilBrown [Thu, 19 Jul 2012 05:59:18 +0000 (15:59 +1000)]
md/raid1: close some possible races on write errors during resync
commit
4367af556133723d0f443e14ca8170d9447317cb
md/raid1: clear bad-block record when write succeeds.
Added a 'reschedule_retry' call possibility at the end of
end_sync_write, but didn't add matching code at the end of
sync_request_write. So if the writes complete very quickly, or
scheduling makes it seem that way, then we can miss rescheduling
the request and the resync could hang.
Also commit
73d5c38a9536142e062c35997b044e89166e063b
md: avoid races when stopping resync.
Fixed a race condition in this same code in end_sync_write but didn't
make the change in sync_request_write.
This patch updates sync_request_write to fix both of those.
Patch is suitable for 3.1 and later kernels.
Reported-by: Alexander Lyakas <alex.bolshoy@gmail.com>
Original-version-by: Alexander Lyakas <alex.bolshoy@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: NeilBrown <neilb@suse.de>
NeilBrown [Thu, 19 Jul 2012 05:59:18 +0000 (15:59 +1000)]
md: avoid crash when stopping md array races with closing other open fds.
md will refuse to stop an array if any other fd (or mounted fs) is
using it.
When any fs is unmounted or when the last open fd is closed, all
pending IO will be flushed (e.g. sync_blockdev call in __blkdev_put)
so there will be no pending IO to worry about when the array is
stopped.
However in order to send the STOP_ARRAY ioctl to stop the array one
must first get an open fd on the block device.
If some fd is being used to write to the block device and it is closed
after mdadm opens the block device, but before mdadm issues the
STOP_ARRAY ioctl, then there will be no last-close on the md device so
__blkdev_put will not call sync_blockdev.
If this happens, then IO can still be in-flight while md tears down
the array and bad things can happen (use-after-free and subsequent
havoc).
So in the case where do_md_stop is being called from an open file
descriptor, call sync_blockdev after taking the mutex to ensure there
will be no new openers.
This is needed when setting a read-write device to read-only too.
Cc: stable@vger.kernel.org
Reported-by: majianpeng <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
NeilBrown [Thu, 19 Jul 2012 05:59:18 +0000 (15:59 +1000)]
md: fix bug in handling of new_data_offset
commit
c6563a8c38fde3c1c7fc925a10bde3ca20799301
md: add possibility to change data-offset for devices.
introduced a 'new_data_offset' attribute which should normally
be the same as 'data_offset', but can be explicitly set to a different
value to allow a reshape operation to move the data.
Unfortunately when the 'data_offset' is explicitly set through
sysfs, the new_data_offset is not also set, so the two would become
out-of-sync incorrectly.
One result of this is that trying to set the 'size' after the
'data_offset' would fail because it is not permitted to set the size
when the 'data_offset' and 'new_data_offset' are different - as that
can be confusing.
Consequently when mdadm tried to do this while assembling an IMSM
array it would fail.
This bug was introduced in 3.5-rc1.
Reported-by: Brian Downing <bdowning@lavos.net>
Bisected-by: Brian Downing <bdowning@lavos.net>
Tested-by: Brian Downing <bdowning@lavos.net>
Signed-off-by: NeilBrown <neilb@suse.de>
Linus Torvalds [Thu, 19 Jul 2012 01:40:38 +0000 (18:40 -0700)]
Merge git://git./linux/kernel/git/nab/target-pending
Pull target fixes from Nicholas Bellinger:
"This includes a bugfix from MDR to address a NULL pointer OOPs with
FCoE aborts, along with a WRITE_SAME emulation bugfix for NOLB=0
cases, and persistent reservation return cleanups from Roland.
All three patches are CC'ed to stable."
* git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending:
target: Fix range calculation in WRITE SAME emulation when num blocks == 0
target: Clean up returning errors in PR handling code
tcm_fc: Fix crash seen with aborts and large reads
Olaf Hering [Wed, 18 Jul 2012 21:12:04 +0000 (14:12 -0700)]
kexec: update URL of kexec homepage
The referenced html file does not exist anymore. Replace the URL with
the current project homepage.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Yoichi Yuasa [Wed, 18 Jul 2012 21:12:01 +0000 (14:12 -0700)]
mips: fix bug.h build regression
Commit
377780887 ("bug.h: need linux/kernel.h for TAINT_WARN.") broke
all MIPS builds:
CC arch/mips/kernel/machine_kexec.o
include/linux/log2.h: In function '__ilog2_u32':
include/linux/log2.h:34:2: error: implicit declaration of function 'fls' [-Werror=implicit-function-declaration]
include/linux/log2.h: In function '__ilog2_u64':
include/linux/log2.h:42:2: error: implicit declaration of function 'fls64' [-Werror=implicit-function-declaration]
...
Signed-off-by: Yoichi Yuasa <yuasa@linux-mips.org>
Tested-by: John Crispin <blogic@openwrt.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Thu, 19 Jul 2012 01:15:46 +0000 (18:15 -0700)]
Make wait_for_device_probe() also do scsi_complete_async_scans()
Commit
a7a20d103994 ("sd: limit the scope of the async probe domain")
make the SCSI device probing run device discovery in it's own async
domain.
However, as a result, the partition detection was no longer synchronized
by async_synchronize_full() (which, despite the name, only synchronizes
the global async space, not all of them). Which in turn meant that
"wait_for_device_probe()" would not wait for the SCSI partitions to be
parsed.
And "wait_for_device_probe()" was what the boot time init code relied on
for mounting the root filesystem.
Now, most people never noticed this, because not only is it
timing-dependent, but modern distributions all use initrd. So the root
filesystem isn't actually on a disk at all. And then before they
actually mount the final disk filesystem, they will have loaded the
scsi-wait-scan module, which not only does the expected
wait_for_device_probe(), but also does scsi_complete_async_scans().
[ Side note: scsi_complete_async_scans() had also been partially broken,
but that was fixed in commit
43a8d39d0137 ("fix async probe
regression"), so that same commit
a7a20d103994 had actually broken
setups even if you used scsi-wait-scan explicitly ]
Solve this problem by just moving the scsi_complete_async_scans() call
into wait_for_device_probe(). Everybody who wants to wait for device
probing to finish really wants the SCSI probing to complete, so there's
no reason not to do this.
So now "wait_for_device_probe()" really does what the name implies, and
properly waits for device probing to finish. This also removes the now
unnecessary extra calls to scsi_complete_async_scans().
Reported-and-tested-by: Artem S. Tashkinov <t.artem@mailcity.com>
Cc: Dan Williams <dan.j.williams@gmail.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: James Bottomley <jbottomley@parallels.com>
Cc: Borislav Petkov <bp@amd64.org>
Cc: linux-scsi <linux-scsi@vger.kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Wed, 18 Jul 2012 20:42:44 +0000 (13:42 -0700)]
Merge branch 'for-linus' of git://git./linux/kernel/git/jmorris/linux-security
Pull SELinux regression fixes from James Morris.
Andrew Morton has a box that hit that open perms problem.
I also renamed the "epollwakeup" selinux name for the new capability to
be "block_suspend", to match the rename done by commit
d9914cf66181
("PM: Rename CAP_EPOLLWAKEUP to CAP_BLOCK_SUSPEND").
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
SELinux: do not check open perms if they are not known to policy
SELinux: include definition of new capabilities
Rustad, Mark D [Wed, 18 Jul 2012 09:06:07 +0000 (09:06 +0000)]
net: Statically initialize init_net.dev_base_head
This change eliminates an initialization-order hazard most
recently seen when netprio_cgroup is built into the kernel.
With thanks to Eric Dumazet for catching a bug.
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Sat, 19 May 2012 01:10:50 +0000 (01:10 +0000)]
ixgbe: Cleanup holes in flags after removing several of them
This change is just meant to defragment the flags, as there are several
holes that have been introduced since several features, or the flags for
them, have been removed.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Sat, 14 Jul 2012 05:42:36 +0000 (05:42 +0000)]
ixgbe: Retire RSS enabled and capable flags
All of our hardware supports RSS even if it is only for a single queue. So
instead of toting around the RSS enable flag I am updating the code so that
all devices are enabled and if we want to disable RSS it is indicated via
the RSS mask.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Sat, 14 Jul 2012 06:48:49 +0000 (06:48 +0000)]
ixgbe: Add support for SR-IOV w/ DCB or RSS
This change essentially makes it so that we can enable almost all of the
features all at once. This patch allows for the combination of SR-IOV,
DCB, and FCoE in the case of the x540. It also beefs up the SR-IOV by
adding support for RSS to the PF.
The testing matrix gets to be very complex for this patch as there are a
number of different features and subsets for queueing options. I tried to
narrow these down a bit by restricting the PF to only supporting 4TC DCB
when it is enabled in addition to SR-IOV.
Cc: Greg Rose <gregory.v.rose@intel.com>
Cc: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Fri, 18 May 2012 06:34:08 +0000 (06:34 +0000)]
ixgbe: Update configure virtualization to allow for multiple PF pools
This change allows all pools from the default pool forward to be enabled via
ixgbe_configure_virtualization. This is needed as we are planning to use
queues belonging to adjacent pools for FCoE when SR-IOV and FCoE are both
enabled.
In addition this patch contains some minor formatting changes as there were
a few spots that seemed to be in need of some cleanup.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Thu, 19 Jul 2012 00:18:05 +0000 (00:18 +0000)]
ixgbevf: Fix multiple issues in ixgbevf_get/set_ringparam
In ixgbevf_get_ringparam we could run into a NULL pointer dereference
if the rings were not allocated when we attempted the call. To prevent
that we can just access the tx/rx_ring_count values instead of attempting
to access the rings to get the count.
This change corrects a memory leak and memory corruption in
ixgbevf_set_ringparam.
The memory leak was due to us not freeing the resources from the ring
before overwriting them. This change corrects the memory leak by making
certain to call ixgbe_free_tx/rx_resources on the rings prior to freeing
them.
The memory corruption was because we were replacing the rings but not
updating the q_vectors. It addresses the memory corruption by leaving the
rings in place and instead just copying the contents of the new rings into
the existing rings.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Fri, 11 May 2012 08:33:21 +0000 (08:33 +0000)]
ixgbevf: Consolidate Tx context descriptor creation code
There is a good bit of redundancy between the Tx checksum and segmentation
offloads. In order to reduce some of this I am moving the code for
creating a context descriptor into a separate function.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Fri, 11 May 2012 08:33:16 +0000 (08:33 +0000)]
ixgbevf: Add netdev to ring structure
This change adds the netdev to the ring structure. This allows for a
quicker transition from ring to netdev without having to go from ring to
adapter to netdev.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Fri, 11 May 2012 08:33:11 +0000 (08:33 +0000)]
ixgbevf: Do not rewind the Rx ring before bumping tail
The driver is going back one step from its previous location before
bumping tail. This is incorrect. We should just be writing the value of
next_to_use into the tail register.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>