Memory Resource Controller

NOTE: The Memory Resource Controller is generically referred to as the
memory controller in this document. Do not confuse the memory controller
used here with the memory controller that is used in hardware.

Salient features

a. Enables control of Anonymous, Page Cache (mapped and unmapped) and
   Swap Cache memory pages.
b. The infrastructure allows easy addition of other types of memory to control.
c. Provides *zero overhead* for non-memory-controller users.
d. Provides a double LRU: global memory pressure causes reclaim from the
   global LRU; a cgroup, on hitting a limit, reclaims from the per-cgroup
   LRU.

Benefits and Purpose of the memory controller

The memory controller isolates the memory behaviour of a group of tasks
from the rest of the system. The article on LWN [12] mentions some probable
uses of the memory controller. The memory controller can be used to

a. Isolate an application or a group of applications.
   Memory-hungry applications can be isolated and limited to a smaller
   amount of memory.
b. Create a cgroup with a limited amount of memory; this can be used
   as a good alternative to booting with mem=XXXX.
c. Virtualization solutions can control the amount of memory they want
   to assign to a virtual machine instance.
d. A CD/DVD burner could control the amount of memory used by the
   rest of the system to ensure that burning does not fail due to lack
   of available memory.
e. There are several other use cases; find one, or use the controller just
   for fun (to learn and hack on the VM subsystem).

1. History

The memory controller has a long history. A request for comments for the
memory controller was posted by Balbir Singh [1]. At the time the RFC was
posted there were several implementations for memory control. The goal of
the RFC was to build consensus and agreement for the minimal features
required for memory control. The first RSS controller was posted by Balbir
Singh [2] in Feb 2007. Pavel Emelianov [3][4][5] has since posted three
versions of the RSS controller. At OLS, at the resource management BoF,
everyone suggested that we handle both page cache and RSS together. Another
request was raised to allow user space handling of OOM. The current memory
controller is at version 6; it combines both mapped (RSS) and unmapped page
cache control [11].

2. Memory Control

Memory is a unique resource in the sense that it is present in a limited
amount. If a task requires a lot of CPU processing, the task can spread
its processing over a period of hours, days, months or years, but with
memory, the same physical memory needs to be reused to accomplish the task.

The memory controller implementation has been divided into phases. These
are:

1. Memory controller
2. mlock(2) controller
3. Kernel user memory accounting and slab control
4. user mappings length controller

The memory controller is the first controller developed.

2.1. Design

The core of the design is a counter called the res_counter. The res_counter
tracks the current memory usage and limit of the group of processes associated
with the controller. Each cgroup has a memory controller specific data
structure (mem_cgroup) associated with it.

2.2. Accounting

                +--------------------+
                |     mem_cgroup     |
                |    (res_counter)   |
                +--------------------+
                 /            ^      \
                /             |       \
           +---------------+  |        +---------------+
           | mm_struct     |  |....    | mm_struct     |
           |               |  |        |               |
           +---------------+  |        +---------------+
                              |
                              +---------------+
                                              |
           +---------------+           +------+--------+
           | page          +---------->|  page_cgroup  |
           |               |           |               |
           +---------------+           +---------------+

             (Figure 1: Hierarchy of Accounting)


Figure 1 shows the important aspects of the controller:

1. Accounting happens per cgroup.
2. Each mm_struct knows which cgroup it belongs to.
3. Each page has a pointer to the page_cgroup, which in turn knows the
   cgroup it belongs to.

The accounting is done as follows: mem_cgroup_charge() is invoked to set up
the necessary data structures and check if the cgroup that is being charged
is over its limit. If it is, then reclaim is invoked on the cgroup.
More details can be found in the reclaim section of this document.
If everything goes well, a page metadata structure called page_cgroup is
allocated and associated with the page. This routine also adds the page to
the per-cgroup LRU.

2.2.1 Accounting details

All mapped anon pages (RSS) and cache pages (Page Cache) are accounted.
(Some pages which can never be reclaimed and will not be on the global LRU
 are not accounted; we only account pages under usual VM management.)

RSS pages are accounted at page fault unless they've already been accounted
for earlier. A file page will be accounted for as Page Cache when it's
inserted into the inode (radix-tree). While it's mapped into the page tables
of processes, duplicate accounting is carefully avoided.

An RSS page is unaccounted when it's fully unmapped. A Page Cache page is
unaccounted when it's removed from the radix-tree.

Accounting information is preserved across page migration.

Note: we only account pages on the LRU, because our purpose is to control
the amount of used pages; pages that are not on the LRU tend to be out of
the VM's control.

2.3 Shared Page Accounting

Shared pages are accounted on the basis of the first-touch approach. The
cgroup that first touches a page is accounted for the page. The principle
behind this approach is that a cgroup that aggressively uses a shared
page will eventually get charged for it (once it is uncharged from
the cgroup that brought it in -- this will happen on memory pressure).

Exception: if CONFIG_CGROUP_MEM_RES_CTLR_SWAP is not used.
When you do swapoff and force swapped-out pages of shmem (tmpfs) back into
memory, the charges for those pages are accounted to the caller of swapoff
rather than to the users of the shmem.


2.4 Swap Extension (CONFIG_CGROUP_MEM_RES_CTLR_SWAP)

The Swap Extension allows you to record charges for swap. A swapped-in page
is charged back to the original page allocator if possible.

When swap is accounted, the following files are added:
 - memory.memsw.usage_in_bytes
 - memory.memsw.limit_in_bytes

The usage of mem+swap is limited by memsw.limit_in_bytes.

Note: why 'mem+swap' rather than just swap?
The global LRU (kswapd) can swap out arbitrary pages. Swapping a page out
moves its charge from memory to swap, so there is no change in the usage
of mem+swap.

In other words, when we want to limit the usage of swap without affecting
the global LRU, a mem+swap limit is better than just limiting swap, from
the OS point of view.

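For example, limiting mem+swap looks the same as limiting memory. The sketch
below assumes the setup from section 3 (controller mounted at /cgroups,
group 0 already created) and a kernel built with
CONFIG_CGROUP_MEM_RES_CTLR_SWAP; the 4M/8M values are only illustrative:

# echo 4M > /cgroups/0/memory.limit_in_bytes
# echo 8M > /cgroups/0/memory.memsw.limit_in_bytes
# cat /cgroups/0/memory.memsw.usage_in_bytes

With these values, the tasks in the group may use at most 4M of memory and
at most 8M of memory plus swap combined.
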
2.5 Reclaim

Each cgroup maintains a per-cgroup LRU that consists of an active
and an inactive list. When a cgroup goes over its limit, we first try
to reclaim memory from the cgroup so as to make space for the new
pages that the cgroup has touched. If the reclaim is unsuccessful,
an OOM routine is invoked to select and kill the bulkiest task in the
cgroup.

The reclaim algorithm has not been modified for cgroups, except that
pages that are selected for reclaiming come from the per-cgroup LRU
list.

2.6 Locking

The memory controller uses the following locking hierarchy:

1. zone->lru_lock is used for selecting pages to be isolated.
2. mem->per_zone->lru_lock protects the per-cgroup LRU (per zone).
3. lock_page_cgroup() is used to protect page->page_cgroup.

3. User Interface

0. Configuration

a. Enable CONFIG_CGROUPS
b. Enable CONFIG_RESOURCE_COUNTERS
c. Enable CONFIG_CGROUP_MEM_RES_CTLR

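A quick way to check whether a running kernel was built with these options
is to grep its build configuration. The sketch below assumes the config is
available at /boot/config-$(uname -r); the location varies by distribution:

# grep -E 'CONFIG_CGROUPS|CONFIG_RESOURCE_COUNTERS|CONFIG_CGROUP_MEM_RES_CTLR' \
      /boot/config-$(uname -r)

All three options should appear with '=y' for the memory controller to be
available.
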
1. Prepare the cgroups
# mkdir -p /cgroups
# mount -t cgroup none /cgroups -o memory

2. Make the new group and move bash into it
# mkdir /cgroups/0
# echo $$ > /cgroups/0/tasks

Since now we're in the 0 cgroup, we can alter the memory limit:
# echo 4M > /cgroups/0/memory.limit_in_bytes

NOTE: We can use a suffix (k, K, m, M, g or G) to indicate values in kilo,
mega or gigabytes.

# cat /cgroups/0/memory.limit_in_bytes
4194304

NOTE: The interface has now changed to display values in bytes
instead of pages.

We can check the usage:
# cat /cgroups/0/memory.usage_in_bytes
1216512

A successful write to memory.limit_in_bytes does not guarantee that the
limit was set to the exact value written into the file. This can be due to
a number of factors, such as rounding up to page boundaries or the total
availability of memory on the system. The user is required to re-read this
file after a write to see the value committed by the kernel.

# echo 1 > memory.limit_in_bytes
# cat memory.limit_in_bytes
4096

The memory.failcnt field gives the number of times that the cgroup limit
was exceeded.

The memory.stat file gives accounting information. Currently, cache, RSS and
active/inactive page statistics are shown (see section 5.2 for details).

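For example, after the limit above has been hit a few times, the failure
count and the detailed statistics can be read back as follows (output is
omitted here, since the values depend entirely on the workload):

# cat /cgroups/0/memory.failcnt
# cat /cgroups/0/memory.stat
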
4. Testing

Balbir posted lmbench, AIM9, LTP and vmmstress results [10] and [11].
Apart from that, v6 has been tested with several applications and regular
daily use. The controller has also been tested on the PPC64, x86_64 and
UML platforms.

4.1 Troubleshooting

Sometimes a user might find that an application under a cgroup is
terminated. There are several causes for this:

1. The cgroup limit is too low (just too low to do anything useful).
2. The user is using anonymous memory and swap is turned off or too low.

A sync followed by echo 1 > /proc/sys/vm/drop_caches will help get rid of
some of the pages cached in the cgroup (page cache pages), as shown below.

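A minimal sketch of that sequence (run as root; sync writes out dirty data
first so that the now-clean page cache can actually be dropped):

# sync
# echo 1 > /proc/sys/vm/drop_caches
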
4.2 Task migration

When a task migrates from one cgroup to another, its charge is not
carried forward. The pages allocated from the original cgroup still
remain charged to it; the charge is dropped when the page is freed or
reclaimed.

4.3 Removing a cgroup

A cgroup can be removed by rmdir, but as discussed in sections 4.1 and 4.2, a
cgroup might have some charge associated with it, even though all
tasks have migrated away from it.
Such charges are freed (by default) or moved to the parent. When moved,
both RSS and caches are moved to the parent.
If both are busy, rmdir() returns -EBUSY. See section 5.1 as well.

Charges recorded in swap information are not updated at removal of a cgroup.
The recorded information is discarded, and a cgroup which uses the swap
(swapcache) will be charged as its new owner.


5. Misc. interfaces

5.1 force_empty
  The memory.force_empty interface is provided to make a cgroup's memory
  usage empty. You can use this interface only when the cgroup has no tasks.
  When anything is written to this file,

  # echo 0 > memory.force_empty

  almost all pages tracked by this memcg will be unmapped and freed. Some
  pages cannot be freed because they are locked or in use; such pages are
  moved to the parent, and this cgroup will be empty. The write may still
  return -EBUSY if the cgroup is too busy.

  A typical use case of this interface is calling it before rmdir().
  Because rmdir() moves all pages to the parent, some unused page cache can
  end up being moved to the parent. If you want to avoid that, force_empty
  is useful.

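  Putting the two together, a tidy removal of the group created in section 3
  might look like the sketch below (either step can fail with -EBUSY if the
  group still has tasks or busy pages):

  # echo 0 > /cgroups/0/memory.force_empty
  # rmdir /cgroups/0
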
5.2 stat file

The memory.stat file includes the following statistics:

cache           - # of bytes of page cache memory.
rss             - # of bytes of anonymous and swap cache memory.
pgpgin          - # of pages paged in (equivalent to # of charging events).
pgpgout         - # of pages paged out (equivalent to # of uncharging events).
active_anon     - # of bytes of anonymous and swap cache memory on the active
                  lru list.
inactive_anon   - # of bytes of anonymous and swap cache memory on the
                  inactive lru list.
active_file     - # of bytes of file-backed memory on the active lru list.
inactive_file   - # of bytes of file-backed memory on the inactive lru list.
unevictable     - # of bytes of memory that cannot be reclaimed (mlocked etc).

The following additional stats are dependent on CONFIG_DEBUG_VM:

inactive_ratio          - VM internal parameter. (see mm/page_alloc.c)
recent_rotated_anon     - VM internal parameter. (see mm/vmscan.c)
recent_rotated_file     - VM internal parameter. (see mm/vmscan.c)
recent_scanned_anon     - VM internal parameter. (see mm/vmscan.c)
recent_scanned_file     - VM internal parameter. (see mm/vmscan.c)

Memo:
        recent_rotated means the recent frequency of LRU rotation.
        recent_scanned means the recent # of scans of the LRU.
        These are shown for easier debugging; please see the code for their
        precise meanings.

Note:
        Only anonymous and swap cache memory is listed as part of the 'rss'
        stat. This should not be confused with the true 'resident set size'
        or the amount of physical memory used by the cgroup. Per-cgroup rss
        accounting is not done yet.

5.3 swappiness
  Similar to /proc/sys/vm/swappiness, but only affecting a hierarchy of
  groups.

  The swappiness of the following cgroups can't be changed:
  - the root cgroup (it uses /proc/sys/vm/swappiness).
  - a cgroup which uses hierarchy and has a child cgroup.
  - a cgroup which uses hierarchy and is not the root of the hierarchy.

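  As a usage sketch, assuming the per-cgroup file is named memory.swappiness
  and that /cgroups/0 is a standalone (non-hierarchical) group as set up in
  section 3, the value can be read and lowered for that group only:

  # cat /cgroups/0/memory.swappiness
  # echo 30 > /cgroups/0/memory.swappiness
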
6. Hierarchy support

The memory controller supports a deep hierarchy and hierarchical accounting.
The hierarchy is created by creating the appropriate cgroups in the
cgroup filesystem. Consider, for example, the following cgroup filesystem
hierarchy:

                root
              /  |  \
             /   |   \
            a    b    c
                      | \
                      |  \
                      d   e

In the diagram above, with hierarchical accounting enabled, all memory
usage of e is accounted to its ancestors up until the root (i.e., c and
root) that have memory.use_hierarchy enabled. If one of the ancestors goes
over its limit, the reclaim algorithm reclaims from the tasks in the
ancestor and the children of the ancestor.

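For example, the hierarchy above, with accounting enabled for the subtree
rooted at c, could be built like this (assuming the mount point from
section 3, and enabling use_hierarchy before creating the children as
required by NOTE1 below):

# cd /cgroups
# mkdir a b c
# echo 1 > c/memory.use_hierarchy
# mkdir c/d c/e

Any limit set on c/memory.limit_in_bytes then constrains the combined usage
of c, d and e.
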
6.1 Enabling hierarchical accounting and reclaim

The memory controller disables the hierarchy feature by default. Support
can be enabled by writing 1 to the memory.use_hierarchy file of the root
cgroup:

# echo 1 > memory.use_hierarchy

The feature can be disabled by:

# echo 0 > memory.use_hierarchy

NOTE1: Enabling/disabling will fail if the cgroup already has other
cgroups created below it.

NOTE2: This feature can be enabled/disabled per subtree.

7. TODO

1. Add support for accounting huge pages (as a separate controller).
2. Make the per-cgroup scanner reclaim not-shared pages first.
3. Teach the controller to account for shared pages.
4. Start reclamation in the background when the limit is
   not yet hit but the usage is getting closer.

Summary

Overall, the memory controller has been a stable controller and has been
commented on and discussed quite extensively in the community.

References

1. Singh, Balbir. RFC: Memory Controller, http://lwn.net/Articles/206697/
2. Singh, Balbir. Memory Controller (RSS Control),
   http://lwn.net/Articles/222762/
3. Emelianov, Pavel. Resource controllers based on process cgroups,
   http://lkml.org/lkml/2007/3/6/198
4. Emelianov, Pavel. RSS controller based on process cgroups (v2),
   http://lkml.org/lkml/2007/4/9/78
5. Emelianov, Pavel. RSS controller based on process cgroups (v3),
   http://lkml.org/lkml/2007/5/30/244
6. Menage, Paul. Control Groups v10, http://lwn.net/Articles/236032/
7. Vaidyanathan, Srinivasan. Control Groups: Pagecache accounting and control
   subsystem (v3), http://lwn.net/Articles/235534/
8. Singh, Balbir. RSS controller v2 test results (lmbench),
   http://lkml.org/lkml/2007/5/17/232
9. Singh, Balbir. RSS controller v2 AIM9 results,
   http://lkml.org/lkml/2007/5/18/1
10. Singh, Balbir. Memory controller v6 test results,
    http://lkml.org/lkml/2007/8/19/36
11. Singh, Balbir. Memory controller introduction (v6),
    http://lkml.org/lkml/2007/8/17/69
12. Corbet, Jonathan. Controlling memory use in cgroups,
    http://lwn.net/Articles/243795/