Linux Base Driver for 10 Gigabit PCI Express Intel(R) Network Connection
========================================================================

Intel 10 Gigabit Linux driver.
Copyright(c) 1999 - 2010 Intel Corporation.

Contents
========

- Identifying Your Adapter
- Additional Configurations
- Performance Tuning
- Known Issues
- Support

Identifying Your Adapter
========================

The driver in this release is compatible with 82598 and 82599-based Intel
Network Connections.

For more information on how to identify your adapter, go to the Adapter &
Driver ID Guide at:

    http://support.intel.com/support/network/sb/CS-012904.htm

SFP+ Devices with Pluggable Optics
----------------------------------

82599-BASED ADAPTERS

NOTES: If your 82599-based Intel(R) Network Adapter came with Intel optics, or
is an Intel(R) Ethernet Server Adapter X520-2, then it only supports Intel
optics and/or the direct attach cables listed below.

When 82599-based SFP+ devices are connected back to back, they should be set to
the same Speed setting via ethtool. Results may vary if you mix speed settings.
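
For example, to pin both ends of a back-to-back link to the same fixed speed,
a command along these lines can be used on each port (eth? is a placeholder
for your interface name; whether autonegotiation must also be disabled
depends on your configuration):

     ethtool -s eth? speed 10000 autoneg off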

Supplier    Type                                             Part Numbers

SR Modules
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)                FTLX8571D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)                AFBR-703SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)                AFBR-703SDZ-IN2
LR Modules
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)                FTLX1471D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)                AFCT-701SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)                AFCT-701SDZ-IN2

The following is a list of 3rd party SFP+ modules and direct attach cables that
have received some testing. Not all modules are applicable to all devices.

Supplier   Type                                              Part Numbers

Finisar    SFP+ SR bailed, 10g single rate                   FTLX8571D3BCL
Avago      SFP+ SR bailed, 10g single rate                   AFBR-700SDZ
Finisar    SFP+ LR bailed, 10g single rate                   FTLX1471D3BCL

Finisar    DUAL RATE 1G/10G SFP+ SR (No Bail)                FTLX8571D3QCV-IT
Avago      DUAL RATE 1G/10G SFP+ SR (No Bail)                AFBR-703SDZ-IN1
Finisar    DUAL RATE 1G/10G SFP+ LR (No Bail)                FTLX1471D3QCV-IT
Avago      DUAL RATE 1G/10G SFP+ LR (No Bail)                AFCT-701SDZ-IN1
Finisar    1000BASE-T SFP                                    FCLF8522P2BTL
Avago      1000BASE-T SFP                                    ABCU-5710RZ

82599-based adapters support all passive and active limiting direct attach
cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.

Laser turns off for SFP+ when ifconfig down
-------------------------------------------
"ifconfig down" turns off the laser for 82599-based SFP+ fiber adapters.
"ifconfig up" turns on the laser.


82598-BASED ADAPTERS

NOTES for 82598-Based Adapters:
- Intel(R) Network Adapters that support removable optical modules only support
  their original module type (i.e., the Intel(R) 10 Gigabit SR Dual Port
  Express Module only supports SR optical modules). If you plug in a different
  type of module, the driver will not load.
- Hot Swapping/hot plugging optical modules is not supported.
- Only single speed, 10 gigabit modules are supported.
- LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module
  types are not supported. Please see your system documentation for details.

The following is a list of 3rd party SFP+ modules and direct attach cables that
have received some testing. Not all modules are applicable to all devices.

Supplier   Type                                              Part Numbers

Finisar    SFP+ SR bailed, 10g single rate                   FTLX8571D3BCL
Avago      SFP+ SR bailed, 10g single rate                   AFBR-700SDZ
Finisar    SFP+ LR bailed, 10g single rate                   FTLX1471D3BCL

82598-based adapters support all passive direct attach cables that comply
with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
cables are not supported.


Flow Control
------------
Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
receiving and transmitting pause frames for ixgbe. When TX is enabled, PAUSE
frames are generated when the receive packet buffer crosses a predefined
threshold.  When RX is enabled, the transmit unit will halt for the time delay
specified when a PAUSE frame is received.

Flow Control is enabled by default. If you want to disable flow control even
when linked to a flow control capable partner, use ethtool:

     ethtool -A eth? autoneg off rx off tx off

NOTE: For 82598 backplane cards entering 1 gig mode, flow control default
behavior is changed to off.  Flow control in 1 gig mode on these devices can
lead to Tx hangs.
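
The pause parameters currently in effect can be queried at any time with:

     ethtool -a eth?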

Additional Configurations
=========================

  Jumbo Frames
  ------------
  The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
  enabled by changing the MTU to a value larger than the default of 1500.
  Use the ifconfig command to increase the MTU size.  For example:

        ifconfig ethx mtu 9000 up

  The maximum MTU setting for Jumbo Frames is 16110.  This value coincides
  with the maximum Jumbo Frame size of 16128.
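
  On systems with the iproute2 tools installed, the same change can be made
  with the ip utility instead of ifconfig (ethx is again a placeholder for
  your interface name):

        ip link set dev ethx mtu 9000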

  Generic Receive Offload, aka GRO
  --------------------------------
  The driver supports the in-kernel software implementation of GRO.  GRO has
  shown that by coalescing Rx traffic into larger chunks of data, CPU
  utilization can be significantly reduced when under large Rx load.  GRO is an
  evolution of the previously-used LRO interface.  GRO is able to coalesce
  other protocols besides TCP.  It's also safe to use with configurations that
  are problematic for LRO, namely bridging and iSCSI.
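
  GRO is normally enabled by default; it can be checked and, if necessary,
  toggled with a recent ethtool (ethX is a placeholder for your interface
  name):

        ethtool -k ethX | grep generic-receive-offload
        ethtool -K ethX gro on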

  Data Center Bridging, aka DCB
  -----------------------------
  DCB is a Quality of Service (QoS) implementation in hardware.
  It uses the VLAN priority tag (802.1p) to filter traffic.  That means
  that there are 8 different priorities that traffic can be filtered into.
  It also enables priority flow control which can limit or eliminate the
  number of dropped packets during network stress.  Bandwidth can be
  allocated to each of these priorities, which is enforced at the hardware
  level.

  To enable DCB support in ixgbe, you must enable the DCB netlink layer to
  allow the userspace tools (see below) to communicate with the driver.
  This can be found in the kernel configuration here:

        -> Networking support
          -> Networking options
            -> Data Center Bridging support

  Once this is selected, DCB support must be selected for ixgbe.  This can
  be found here:

        -> Device Drivers
          -> Network device support (NETDEVICES [=y])
            -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
              -> Intel(R) 10GbE PCI Express adapters support
                -> Data Center Bridging (DCB) Support

  After these options are selected, you must rebuild your kernel and your
  modules.
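
  With both options enabled, the resulting kernel configuration should
  contain entries along these lines (the option names assume a kernel tree
  of the same vintage as this driver):

        CONFIG_DCB=y
        CONFIG_IXGBE_DCB=y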

  In order to use DCB, userspace tools must be downloaded and installed.
  The dcbd tools can be found at:

        http://e1000.sf.net

  Ethtool
  -------
  The driver utilizes the ethtool interface for driver configuration and
  diagnostics, as well as displaying statistical information. The latest
  ethtool version is required for this functionality.

  The latest release of ethtool can be found at:
  http://ftp.kernel.org/pub/software/network/ethtool/
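
  For example, the following commands display the driver name and version,
  the current link settings, and the adapter statistics, respectively (ethX
  is a placeholder for your interface name):

        ethtool -i ethX
        ethtool ethX
        ethtool -S ethX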

  FCoE
  ----
  This release of the ixgbe driver contains new code to enable users to use
  Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB)
  functionality that is supported by the 82599-based hardware.  This code has
  no default effect on the regular driver operation, and configuring DCB and
  FCoE is outside the scope of this driver README. Refer to
  http://www.open-fcoe.org/ for FCoE project information and contact
  e1000-eedc@lists.sourceforge.net for DCB information.

  MAC and VLAN anti-spoofing feature
  ----------------------------------
  When a malicious VF driver attempts to send a spoofed packet, it is dropped
  by the hardware and not transmitted.  An interrupt is sent to the PF driver
  notifying it of the spoof attempt.

  When a spoofed packet is detected, the PF driver will send the following
  message to the system log (displayed by the "dmesg" command):

        Spoof event(s) detected on VF (n)

  Where n is the number of the VF that attempted the spoofing.
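
  On kernels and iproute2 versions recent enough to expose per-VF settings,
  spoof checking can be inspected and toggled per VF with the ip utility
  (ethX and the VF number are placeholders):

        ip link set ethX vf 0 spoofchk off
        ip link show ethX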


Performance Tuning
==================

An excellent article on performance tuning can be found at:

http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf


Known Issues
============

  Enabling SR-IOV in a 32-bit Microsoft* Windows* Server 2008 Guest OS using
  Intel(R) 82576-based GbE or Intel(R) 82599-based 10GbE controller under KVM
  -----------------------------------------------------------------------------
  KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM.  This
  includes traditional PCIe devices, as well as SR-IOV-capable devices using
  Intel 82576-based and 82599-based controllers.

  While direct assignment of a PCIe device or an SR-IOV Virtual Function (VF)
  to a Linux-based VM running a 2.6.32 or later kernel works fine, there is a
  known issue with a Microsoft Windows Server 2008 VM that results in a "yellow
  bang" error. The problem is within the KVM VMM itself, not the Intel driver
  or the SR-IOV logic of the VMM: rather, KVM emulates an older CPU model for
  the guests, and this older CPU model does not support MSI-X interrupts, which
  is a requirement for Intel SR-IOV.

  If you wish to use the Intel 82576- or 82599-based controllers in SR-IOV
  mode with KVM and a Microsoft Windows Server 2008 guest, the workaround is
  to tell KVM to emulate a different model of CPU when using qemu to create
  the KVM guest:

       "-cpu qemu64,model=13"
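
  For instance, a hypothetical qemu invocation along the following lines
  picks up the CPU model override (the memory size, disk image, and host PCI
  address of the assigned VF are illustrative placeholders, not required
  values):

       qemu-system-x86_64 -enable-kvm -m 2048 -cpu qemu64,model=13 \
            -drive file=windows2008.img -device pci-assign,host=01:10.0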


Support
=======

For general information, go to the Intel support website at:

    http://support.intel.com

or the Intel Wired Networking project hosted by Sourceforge at:

    http://e1000.sourceforge.net

If an issue is identified with the released source code on the supported
kernel with a supported adapter, email the specific information related
to the issue to e1000-devel@lists.sf.net