Zswap is a lightweight compressed cache for swap pages. It takes pages that are
in the process of being swapped out and attempts to compress them into a
dynamically allocated RAM-based memory pool.  zswap basically trades CPU cycles
for potentially reduced swap I/O.  This trade-off can also result in a
significant performance improvement if reads from the compressed cache are
faster than reads from a swap device.
NOTE: Zswap is a new feature as of v3.11 and interacts heavily with memory
reclaim.  This interaction has not been fully explored on the large set of
potential configurations and workloads that exist.  For this reason, zswap
is a work in progress and should be considered experimental.
Some potential benefits:
* Desktop/laptop users with limited RAM capacities can mitigate the
    performance impact of swapping.
* Overcommitted guests that share a common I/O resource can
    dramatically reduce their swap I/O pressure, avoiding heavy handed I/O
    throttling by the hypervisor.  This allows more work to get done with less
    impact to the guest workload and guests sharing the I/O subsystem.
* Users with SSDs as swap devices can extend the life of the device by
    drastically reducing life-shortening writes.
Zswap evicts pages from compressed cache on an LRU basis to the backing swap
device when the compressed pool reaches its size limit.  This requirement had
been identified in prior community discussions.
To enable zswap, the "enabled" attribute must be set to 1 at boot time, e.g.
zswap.enabled=1 on the kernel command line.
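Assuming the standard module-parameter layout (the zswap.enabled boot parameter
and its counterpart under /sys/module/zswap/parameters), enabling zswap looks
roughly like this; paths are an assumption about a typical build:

```shell
# At boot: append to the kernel command line in the bootloader config:
#   zswap.enabled=1
# At runtime (as root), via the module parameter exposed in sysfs:
echo 1 > /sys/module/zswap/parameters/enabled
# Verify the current state:
cat /sys/module/zswap/parameters/enabled
```

These are configuration fragments; whether the runtime toggle is writable
depends on the kernel build.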
Zswap receives pages for compression through the Frontswap API and is able to
evict pages from its own compressed pool on an LRU basis and write them back to
the backing swap device in the case that the compressed pool is full.
Zswap makes use of zbud for managing the compressed memory pool.  Each
allocation in zbud is not directly accessible by address.  Rather, a handle is
returned by the allocation routine and that handle must be mapped before being
accessed.  The compressed memory pool grows on demand and shrinks as compressed
pages are freed.  The pool is not preallocated.
When a swap page is passed from frontswap to zswap, zswap maintains a mapping
of the swap entry, a combination of the swap type and swap offset, to the zbud
handle that references that compressed swap page.  This mapping is achieved
with a red-black tree per swap type.  The swap offset is the search key for the
tree nodes.
During a page fault on a PTE that is a swap entry, frontswap calls the zswap
load function to decompress the page into the page allocated by the page fault
handler.
Once there are no PTEs referencing a swap page stored in zswap (i.e. the count
in the swap_map goes to 0) the swap code calls the zswap invalidate function,
via frontswap, to free the compressed entry.
Zswap seeks to be simple in its policies.  Sysfs attributes allow for one
user-controlled policy:
* max_pool_percent - The maximum percentage of memory that the compressed
    pool can occupy.
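As a rough illustration of what the limit means, the cap on the compressed
pool can be computed from total memory and max_pool_percent.  The 8 GiB figure
and the 20 percent value below are assumptions for the sake of the example:

```shell
# Hypothetical sketch: compressed-pool cap for a machine with 8 GiB of
# RAM and a max_pool_percent of 20 (values assumed for illustration).
mem_kb=8388608                       # total memory in KiB (8 GiB, assumed)
percent=20                           # assumed max_pool_percent value
cap_kb=$(( mem_kb * percent / 100 )) # integer arithmetic, truncates
echo "pool cap: ${cap_kb} KiB"       # pool cap: 1677721 KiB (~1.6 GiB)
```

Once the pool grows past this cap, zswap starts writing LRU pages back to the
swap device as described above.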
Zswap allows the compressor to be selected at kernel boot time by setting the
"compressor" attribute, e.g. zswap.compressor=lzo on the kernel command line.
The default compressor is lzo.
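Assuming the same module-parameter layout as above (an assumption about a
typical build), the selected compressor can be inspected at runtime; lz4 in
the boot-time example is an assumed alternative that must be available in the
kernel's crypto API:

```shell
# Boot-time selection of an alternative compressor (kernel command line),
# assuming lz4 support is built in or available as a module:
#   zswap.compressor=lz4
# Inspect which compressor is currently active:
cat /sys/module/zswap/parameters/compressor
```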
A debugfs interface is provided for various statistics about pool size, number
of pages stored, and various counters for the reasons pages are rejected.
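On systems with debugfs mounted at /sys/kernel/debug, these statistics can be
read as plain files.  The directory name and the exact attribute names are
assumptions that may vary by kernel version; check what your kernel exposes:

```shell
# List the available zswap statistics (requires root and mounted debugfs):
ls /sys/kernel/debug/zswap/
# Dump every counter prefixed with its file name, e.g. the stored-page
# count, pool size, and reject counters (names assumed; verify locally):
grep -r . /sys/kernel/debug/zswap/
```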