Scaling in the Linux Networking Stack


Introduction
============

This document describes a set of complementary techniques in the Linux
networking stack to increase parallelism and improve performance for
multi-processor systems.

The following technologies are described:

  RSS: Receive Side Scaling
  RPS: Receive Packet Steering
  RFS: Receive Flow Steering
  Accelerated Receive Flow Steering
  XPS: Transmit Packet Steering


RSS: Receive Side Scaling
=========================

Contemporary NICs support multiple receive and transmit descriptor queues
(multi-queue). On reception, a NIC can send different packets to different
queues to distribute processing among CPUs. The NIC distributes packets by
applying a filter to each packet that assigns it to one of a small number
of logical flows. Packets for each flow are steered to a separate receive
queue, which in turn can be processed by separate CPUs. This mechanism is
generally known as "Receive-side Scaling" (RSS). The goal of RSS and
the other scaling techniques is to increase performance uniformly.
Multi-queue distribution can also be used for traffic prioritization, but
that is not the focus of these techniques.

The filter used in RSS is typically a hash function over the network
and/or transport layer headers -- for example, a 4-tuple hash over the
IP addresses and TCP ports of a packet. The most common hardware
implementation of RSS uses a 128-entry indirection table where each entry
stores a queue number. The receive queue for a packet is determined
by taking the low order seven bits of the computed hash for the packet
(usually a Toeplitz hash) as a key into the indirection table and
reading the corresponding value.
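
As a concrete illustration, the lookup amounts to indexing a small table
with the masked hash. The following userspace sketch (not kernel code)
models this; the 128-entry size matches the common case above, while the
queue count and sample hash value are made up for illustration -- real
hardware computes a Toeplitz hash over the packet headers:

#include <stdio.h>

#define INDIR_TABLE_SIZE 128	/* common hardware table size */

int main(void)
{
	unsigned char indir_table[INDIR_TABLE_SIZE];
	unsigned int nr_queues = 8;	/* hypothetical queue count */
	unsigned int hash = 0x9e3779b9;	/* stand-in for the Toeplitz hash */
	unsigned int i, queue;

	/* Default mapping: distribute the queues evenly in the table. */
	for (i = 0; i < INDIR_TABLE_SIZE; i++)
		indir_table[i] = i % nr_queues;

	/* The low order seven bits of the hash index the 128 entries. */
	queue = indir_table[hash & (INDIR_TABLE_SIZE - 1)];
	printf("hash 0x%08x -> rx queue %u\n", hash, queue);
	return 0;
}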

Some advanced NICs allow steering packets to queues based on
programmable filters. For example, webserver-bound TCP port 80 packets
can be directed to their own receive queue. Such "n-tuple" filters can
be configured from ethtool (--config-ntuple).
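
For example, with a hypothetical interface eth0 and a sufficiently
recent ethtool, a filter directing TCP port 80 traffic to receive
queue 2 might be configured as (exact syntax varies by ethtool version):

 ethtool --config-ntuple eth0 flow-type tcp4 dst-port 80 action 2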

==== RSS Configuration

The driver for a multi-queue capable NIC typically provides a kernel
module parameter for specifying the number of hardware queues to
configure. In the bnx2x driver, for instance, this parameter is called
num_queues. A typical RSS configuration would be to have one receive queue
for each CPU if the device supports enough queues, or otherwise at least
one for each memory domain, where a memory domain is a set of CPUs that
share a particular memory level (L1, L2, NUMA node, etc.).
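
For instance, assuming the bnx2x driver is built as a module, a
sixteen-queue configuration might be requested with:

 modprobe bnx2x num_queues=16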

The indirection table of an RSS device, which resolves a queue by masked
hash, is usually programmed by the driver at initialization. The
default mapping is to distribute the queues evenly in the table, but the
indirection table can be retrieved and modified at runtime using ethtool
commands (--show-rxfh-indir and --set-rxfh-indir). Modifying the
indirection table could be done to give different queues different
relative weights.
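
For example, on a hypothetical interface eth0, the table could be
inspected and then reprogrammed so that the first queue receives twice
the weight of the second (the weight option depends on the ethtool
version):

 ethtool --show-rxfh-indir eth0
 ethtool --set-rxfh-indir eth0 weight 2 1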

==== RSS IRQ Configuration

Each receive queue has a separate IRQ associated with it. The NIC triggers
this to notify a CPU when new packets arrive on the given queue. The
signaling path for PCIe devices uses message signaled interrupts (MSI-X),
which can route each interrupt to a particular CPU. The active mapping
of queues to IRQs can be determined from /proc/interrupts. By default,
an IRQ may be handled on any CPU. Because a non-negligible part of packet
processing takes place in receive interrupt handling, it is advantageous
to spread receive interrupts between CPUs. To manually adjust the IRQ
affinity of each interrupt see Documentation/IRQ-affinity.txt. Some systems
will be running irqbalance, a daemon that dynamically optimizes IRQ
assignments and as a result may override any manual settings.
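
For example, if /proc/interrupts shows that a given receive queue uses
IRQ 30 (a hypothetical number), that interrupt can be pinned to CPU 2
by writing the corresponding bitmap:

 echo 4 > /proc/irq/30/smp_affinity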

== Suggested Configuration

RSS should be enabled when latency is a concern or whenever receive
interrupt processing forms a bottleneck. Spreading load between CPUs
decreases queue length. For low latency networking, the optimal setting
is to allocate as many queues as there are CPUs in the system (or the
NIC maximum, if lower). The most efficient high-rate configuration
is likely the one with the smallest number of receive queues where no
receive queue overflows due to a saturated CPU, because in default
mode with interrupt coalescing enabled, the aggregate number of
interrupts (and thus work) grows with each additional queue.

Per-CPU load can be observed using the mpstat utility, but note that on
processors with hyperthreading (HT), each hyperthread is represented as
a separate CPU. For interrupt handling, HT has shown no benefit in
initial tests, so limit the number of queues to the number of CPU cores
in the system.
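
For example, per-CPU utilization can be sampled once per second with:

 mpstat -P ALL 1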


RPS: Receive Packet Steering
============================

Receive Packet Steering (RPS) is logically a software implementation of
RSS. Being in software, it is necessarily called later in the datapath.
Whereas RSS selects the queue and hence the CPU that will run the hardware
interrupt handler, RPS selects the CPU to perform protocol processing
above the interrupt handler. This is accomplished by placing the packet
on the desired CPU's backlog queue and waking up the CPU for processing.
RPS has some advantages over RSS: 1) it can be used with any NIC,
2) software filters can easily be added to hash over new protocols,
3) it does not increase the hardware device interrupt rate (although it
does introduce inter-processor interrupts (IPIs)).

RPS is called during the bottom half of the receive interrupt handler,
when a driver sends a packet up the network stack with netif_rx() or
netif_receive_skb(). These call the get_rps_cpu() function, which
selects the CPU that should process the packet.

The first step in determining the target CPU for RPS is to calculate a
flow hash over the packet's addresses or ports (2-tuple or 4-tuple hash
depending on the protocol). This serves as a consistent hash of the
associated flow of the packet. The hash is either provided by hardware
or will be computed in the stack. Capable hardware can pass the hash in
the receive descriptor for the packet; this would usually be the same
hash used for RSS (e.g. computed Toeplitz hash). The hash is saved in
skb->rx_hash and can be used elsewhere in the stack as a hash of the
packet's flow.

Each receive hardware queue has an associated list of CPUs to which
RPS may enqueue packets for processing. For each received packet,
an index into the list is computed from the flow hash modulo the size
of the list. The indexed CPU is the target for processing the packet,
and the packet is queued to the tail of that CPU's backlog queue. At
the end of the bottom half routine, IPIs are sent to any CPUs for which
packets have been queued to their backlog queue. The IPI wakes backlog
processing on the remote CPU, and any queued packets are then processed
up the networking stack.
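
A simplified userspace model of this selection step is sketched below.
The CPU list and hash value are illustrative stand-ins, not the kernel's
actual data structures:

#include <stdio.h>

int main(void)
{
	/* CPUs enabled in this queue's rps_cpus bitmap (hypothetical). */
	int rps_cpus[] = { 1, 2, 3 };
	unsigned int map_len = sizeof(rps_cpus) / sizeof(rps_cpus[0]);
	unsigned int flow_hash = 0x5ca1ab1e;	/* stand-in for skb->rx_hash */
	int target_cpu;

	/* The index into the list is the flow hash modulo its size. */
	target_cpu = rps_cpus[flow_hash % map_len];

	/* The kernel would now enqueue the packet to this CPU's backlog
	 * queue and send an IPI to wake backlog processing there. */
	printf("flow hash 0x%08x -> backlog of CPU %d\n", flow_hash,
	       target_cpu);
	return 0;
}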

==== RPS Configuration

RPS requires a kernel compiled with the CONFIG_RPS kconfig symbol (on
by default for SMP). Even when compiled in, RPS remains disabled until
explicitly configured. The list of CPUs to which RPS may forward traffic
can be configured for each receive queue using a sysfs file entry:

 /sys/class/net/<dev>/queues/rx-<n>/rps_cpus

This file implements a bitmap of CPUs. RPS is disabled when it is zero
(the default), in which case packets are processed on the interrupting
CPU. Documentation/IRQ-affinity.txt explains how CPUs are assigned to
the bitmap.
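
For example, to let RPS steer packets arriving on receive queue 0 of a
hypothetical device eth0 to CPUs 0-3 (bitmap 0xf):

 echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus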

== Suggested Configuration

For a single queue device, a typical RPS configuration would be to set
rps_cpus to the CPUs in the same memory domain as the interrupting
CPU. If NUMA locality is not an issue, this could also be all CPUs in
the system. At high interrupt rates, it might be wise to exclude the
interrupting CPU from the map since it already performs much work.

For a multi-queue system, if RSS is configured so that a hardware
receive queue is mapped to each CPU, then RPS is probably redundant
and unnecessary. If there are fewer hardware queues than CPUs, then
RPS might be beneficial if the rps_cpus for each queue are the ones that
share the same memory domain as the interrupting CPU for that queue.


RFS: Receive Flow Steering
==========================

While RPS steers packets solely based on hash, and thus generally
provides good load distribution, it does not take into account
application locality. Receive Flow Steering (RFS) addresses this. The
goal of RFS is to increase the data cache hit rate by steering
kernel processing of packets to the CPU where the application thread
consuming the packet is running. RFS relies on the same RPS mechanisms
to enqueue packets onto the backlog of another CPU and to wake up that
CPU.

In RFS, packets are not forwarded directly by the value of their hash,
but the hash is used as an index into a flow lookup table. This table
maps flows to the CPUs where those flows are being processed. The flow
hash (see RPS section above) is used to calculate the index into this
table. The CPU recorded in each entry is the one which last processed
the flow. If an entry does not hold a valid CPU, then packets mapped to
that entry are steered using plain RPS. Multiple table entries may point
to the same CPU. Indeed, with many flows and few CPUs, it is very likely
that a single application thread handles flows with many different flow
hashes.

rps_sock_flow_table is a global flow table that contains the *desired* CPU
for flows: the CPU that is currently processing the flow in userspace.
Each table value is a CPU index that is updated during calls to recvmsg
and sendmsg (specifically, inet_recvmsg(), inet_sendmsg(), inet_sendpage()
and tcp_splice_read()).

When the scheduler moves a thread to a new CPU while it has outstanding
receive packets on the old CPU, packets may arrive out of order. To
avoid this, RFS uses a second flow table to track outstanding packets
for each flow: rps_dev_flow_table is a table specific to each hardware
receive queue of each device. Each table value stores a CPU index and a
counter. The CPU index represents the *current* CPU onto which packets
for this flow are enqueued for further kernel processing. Ideally, kernel
and userspace processing occur on the same CPU, and hence the CPU index
in both tables is identical. This is likely false if the scheduler has
recently migrated a userspace thread while the kernel still has packets
enqueued for kernel processing on the old CPU.

The counter in rps_dev_flow_table values records the length of the current
CPU's backlog when a packet in this flow was last enqueued. Each backlog
queue has a head counter that is incremented on dequeue. A tail counter
is computed as head counter + queue length. In other words, the counter
in rps_dev_flow[i] records the last element in flow i that has
been enqueued onto the currently designated CPU for flow i (of course,
entry i is actually selected by hash and multiple flows may hash to the
same entry i).

And now the trick for avoiding out of order packets: when selecting the
CPU for packet processing (from get_rps_cpu()) the rps_sock_flow table
and the rps_dev_flow table of the queue that the packet was received on
are compared. If the desired CPU for the flow (found in the
rps_sock_flow table) matches the current CPU (found in the rps_dev_flow
table), the packet is enqueued onto that CPU's backlog. If they differ,
the current CPU is updated to match the desired CPU if one of the
following is true:

- The current CPU's queue head counter >= the recorded tail counter
  value in rps_dev_flow[i]
- The current CPU is unset (equal to RPS_NO_CPU)
- The current CPU is offline

After this check, the packet is sent to the (possibly updated) current
CPU. These rules aim to ensure that a flow only moves to a new CPU when
there are no packets outstanding on the old CPU, as the outstanding
packets could arrive later than those about to be processed on the new
CPU.
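
The following userspace sketch models this decision rule. The types,
the RPS_NO_CPU sentinel, and cpu_is_offline() are stand-ins mirroring
the description above, not the kernel's exact definitions:

#include <stdio.h>
#include <stdbool.h>

#define RPS_NO_CPU 0xffffu

struct dev_flow {
	unsigned int cpu;	/* current CPU for the flow */
	unsigned int last_qtail;	/* tail counter at last enqueue */
};

/* Head counter of each CPU's backlog queue (incremented on dequeue). */
static unsigned int backlog_head[4] = { 100, 42, 7, 0 };

static bool cpu_is_offline(unsigned int cpu)
{
	(void)cpu;
	return false;	/* assume every CPU is online in this model */
}

static unsigned int select_cpu(unsigned int desired, struct dev_flow *df)
{
	if (desired != df->cpu &&
	    (df->cpu == RPS_NO_CPU ||
	     cpu_is_offline(df->cpu) ||
	     (int)(backlog_head[df->cpu] - df->last_qtail) >= 0))
		/* No packets outstanding on the old CPU: safe to move. */
		df->cpu = desired;
	return df->cpu;
}

int main(void)
{
	struct dev_flow df = { .cpu = 1, .last_qtail = 40 };
	unsigned int desired = 2;	/* from the rps_sock_flow table */

	/* head (42) >= recorded tail (40), so the flow moves to CPU 2. */
	printf("packet steered to CPU %u\n", select_cpu(desired, &df));
	return 0;
}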

==== RFS Configuration

RFS is only available if the kconfig symbol CONFIG_RPS is enabled (on
by default for SMP). The functionality remains disabled until explicitly
configured. The number of entries in the global flow table is set through:

 /proc/sys/net/core/rps_sock_flow_entries

The number of entries in the per-queue flow table is set through:

 /sys/class/net/<dev>/queues/rx-<n>/rps_flow_cnt

== Suggested Configuration

Both of these need to be set before RFS is enabled for a receive queue.
Values for both are rounded up to the nearest power of two. The
suggested flow count depends on the expected number of active connections
at any given time, which may be significantly less than the number of open
connections. We have found that a value of 32768 for rps_sock_flow_entries
works fairly well on a moderately loaded server.

For a single queue device, the rps_flow_cnt value for the single queue
would normally be configured to the same value as rps_sock_flow_entries.
For a multi-queue device, the rps_flow_cnt for each queue might be
configured as rps_sock_flow_entries / N, where N is the number of
queues. So for instance, if rps_sock_flow_entries is set to 32768 and there
are 16 configured receive queues, rps_flow_cnt for each queue might be
configured as 2048.
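
Continuing that example on a hypothetical device eth0 with 16 receive
queues, the values could be applied as follows, repeating the second
command for each of rx-0 through rx-15:

 echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
 echo 2048 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt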


Accelerated RFS
===============

Accelerated RFS is to RFS what RSS is to RPS: a hardware-accelerated load
balancing mechanism that uses soft state to steer flows based on where
the application thread consuming the packets of each flow is running.
Accelerated RFS should perform better than RFS since packets are sent
directly to a CPU local to the thread consuming the data. The target CPU
will either be the same CPU where the application runs, or at least a CPU
which is local to the application thread's CPU in the cache hierarchy.

To enable accelerated RFS, the networking stack calls the
ndo_rx_flow_steer driver function to communicate the desired hardware
queue for packets matching a particular flow. The network stack
automatically calls this function every time a flow entry in
rps_dev_flow_table is updated. The driver in turn uses a device specific
method to program the NIC to steer the packets.

The hardware queue for a flow is derived from the CPU recorded in
rps_dev_flow_table. The stack consults a CPU to hardware queue map which
is maintained by the NIC driver. This is an auto-generated reverse map of
the IRQ affinity table shown by /proc/interrupts. Drivers can use
functions in the cpu_rmap ("CPU affinity reverse map") kernel library
to populate the map. For each CPU, the corresponding queue in the map is
set to be one whose processing CPU is closest in cache locality.
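
As a rough sketch, a driver might populate this reverse map during
initialization along the following lines. This is an outline under
assumed names (example_init_rx_cpu_rmap and the msix_irqs array are
hypothetical), not a complete driver:

#include <linux/netdevice.h>
#include <linux/cpu_rmap.h>

static int example_init_rx_cpu_rmap(struct net_device *netdev,
				    int nr_rx_queues, int *msix_irqs)
{
	struct cpu_rmap *rmap;
	int i, rc;

	rmap = alloc_irq_cpu_rmap(nr_rx_queues);
	if (!rmap)
		return -ENOMEM;

	/* Register each receive queue's MSI-X interrupt; the cpu_rmap
	 * library tracks its affinity and keeps the map current. */
	for (i = 0; i < nr_rx_queues; i++) {
		rc = irq_cpu_rmap_add(rmap, msix_irqs[i]);
		if (rc) {
			free_irq_cpu_rmap(rmap);
			return rc;
		}
	}

	netdev->rx_cpu_rmap = rmap;
	return 0;
}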

==== Accelerated RFS Configuration

Accelerated RFS is only available if the kernel is compiled with
CONFIG_RFS_ACCEL and support is provided by the NIC device and driver.
It also requires that ntuple filtering is enabled via ethtool. The map
of CPU to queues is automatically deduced from the IRQ affinities
configured for each receive queue by the driver, so no additional
configuration should be necessary.
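
For example, ntuple filtering can typically be turned on for a
hypothetical interface eth0 with:

 ethtool -K eth0 ntuple on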

== Suggested Configuration

This technique should be enabled whenever one wants to use RFS and the
NIC supports hardware acceleration.


XPS: Transmit Packet Steering
=============================

Transmit Packet Steering is a mechanism for intelligently selecting
which transmit queue to use when transmitting a packet on a multi-queue
device. To accomplish this, a mapping from CPU to hardware queue(s) is
recorded. The goal of this mapping is usually to assign queues
exclusively to a subset of CPUs, where the transmit completions for
these queues are processed on a CPU within this set. This choice
provides two benefits. First, contention on the device queue lock is
significantly reduced since fewer CPUs contend for the same queue
(contention can be eliminated completely if each CPU has its own
transmit queue). Second, the cache miss rate on transmit completion is
reduced, in particular for data cache lines that hold the sk_buff
structures.

XPS is configured per transmit queue by setting a bitmap of CPUs that
may use that queue to transmit. The reverse mapping, from CPUs to
transmit queues, is computed and maintained for each network device.
When transmitting the first packet in a flow, the function
get_xps_queue() is called to select a queue. This function uses the ID
of the running CPU as a key into the CPU-to-queue lookup table. If the
ID matches a single queue, that is used for transmission. If multiple
queues match, one is selected by using the flow hash to compute an index
into the set.
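
A userspace model of this selection is sketched below; the queue list
standing in for the CPU-to-queue map and the hash value are
hypothetical:

#include <stdio.h>

int main(void)
{
	/* Queues usable by the sending CPU per the xps_cpus bitmaps
	 * (hypothetical: this CPU maps to queues 4 and 5). */
	int queues[] = { 4, 5 };
	unsigned int nr = sizeof(queues) / sizeof(queues[0]);
	unsigned int flow_hash = 0xfeedbeef;	/* stand-in flow hash */
	int txq;

	if (nr == 1)
		txq = queues[0];		/* single match: use it */
	else
		txq = queues[flow_hash % nr];	/* hash picks among matches */

	printf("flow hash 0x%08x -> tx queue %d\n", flow_hash, txq);
	return 0;
}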

The queue chosen for transmitting a particular flow is saved in the
corresponding socket structure for the flow (e.g. a TCP connection).
This transmit queue is used for subsequent packets sent on the flow to
prevent out of order (ooo) packets. The choice also amortizes the cost
of calling get_xps_queue() over all packets in the flow. To avoid
ooo packets, the queue for a flow can subsequently only be changed if
skb->ooo_okay is set for a packet in the flow. This flag indicates that
there are no outstanding packets in the flow, so the transmit queue can
change without the risk of generating out of order packets. The
transport layer is responsible for setting ooo_okay appropriately. TCP,
for instance, sets the flag when all data for a connection has been
acknowledged.

==== XPS Configuration

XPS is only available if the kconfig symbol CONFIG_XPS is enabled (on by
default for SMP). The functionality remains disabled until explicitly
configured. To enable XPS, the bitmap of CPUs that may use a transmit
queue is configured using the sysfs file entry:

/sys/class/net/<dev>/queues/tx-<n>/xps_cpus
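
For example, to dedicate transmit queue 0 of a hypothetical device eth0
to CPU 0 (bitmap 0x1):

 echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus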

== Suggested Configuration

For a network device with a single transmission queue, XPS configuration
has no effect, since there is no choice in this case. In a multi-queue
system, XPS is preferably configured so that each CPU maps onto one queue.
If there are as many queues as there are CPUs in the system, then each
queue can also map onto one CPU, resulting in exclusive pairings that
experience no contention. If there are fewer queues than CPUs, then the
best CPUs to share a given queue are probably those that share the cache
with the CPU that processes transmit completions for that queue
(transmit interrupts).


Further Information
===================
RPS and RFS were introduced in kernel 2.6.35. XPS was incorporated into
2.6.38. Original patches were submitted by Tom Herbert
(therbert@google.com).
Accelerated RFS was introduced in 2.6.39. Original patches were
submitted by Ben Hutchings (bhutchings@solarflare.com).

Authors:
Tom Herbert (therbert@google.com)
Willem de Bruijn (willemb@google.com)