Documentation for /proc/sys/vm/*        kernel version 2.6.29
        (c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>
        (c) 2008         Peter W. Morreale <pmorreale@novell.com>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.29.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:

- block_dump
- compact_memory
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirty_writeback_centisecs
- drop_caches
- extfrag_threshold
- hugepages_treat_as_movable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- nr_hugepages
- nr_overcommit_hugepages
- nr_trim_pages         (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_memory
- overcommit_ratio
- page-cluster
- panic_on_oom
- percpu_pagelist_fraction
- stat_interval
- swappiness
- vfs_cache_pressure
- zone_reclaim_mode

==============================================================

block_dump

block_dump enables block I/O debugging when set to a nonzero value. More
information on block I/O debugging is in Documentation/laptops/laptop-mode.txt.

==============================================================

compact_memory

Available only when CONFIG_COMPACTION is set. When 1 is written to the file,
all zones are compacted such that free memory is available in contiguous
blocks where possible. This can be important for example in the allocation of
huge pages although processes will also directly compact memory as required.

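For example, to trigger a one-off compaction of all zones by hand
(assuming CONFIG_COMPACTION is set):

        echo 1 > /proc/sys/vm/compact_memory
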
==============================================================

dirty_background_bytes

Contains the amount of dirty memory at which the background kernel
flusher threads will start writeback.

Note: dirty_background_bytes is the counterpart of dirty_background_ratio. Only
one of them may be specified at a time. When one sysctl is written it is
immediately taken into account to evaluate the dirty memory limits and the
other appears as 0 when read.

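A quick way to see this mutual exclusion in action (the value here is
illustrative only):

        echo 16777216 > /proc/sys/vm/dirty_background_bytes
        cat /proc/sys/vm/dirty_background_ratio    # now reads 0
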
==============================================================

dirty_background_ratio

Contains, as a percentage of total system memory, the number of pages at which
the background kernel flusher threads will start writing out dirty data.

==============================================================

dirty_bytes

Contains the amount of dirty memory at which a process generating disk writes
will itself start writeback.

Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be
specified at a time. When one sysctl is written it is immediately taken into
account to evaluate the dirty memory limits and the other appears as 0 when
read.

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
value lower than this limit will be ignored and the old configuration will be
retained.

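For example, with a 4 KiB page size the smallest accepted value is
2 * 4096 = 8192 bytes; writing anything smaller leaves the previous
setting in place.
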
==============================================================

dirty_expire_centisecs

This tunable is used to define when dirty data is old enough to be eligible
for writeout by the kernel flusher threads.  It is expressed in hundredths
of a second.  Data which has been dirty in-memory for longer than this
interval will be written out next time a flusher thread wakes up.

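For example, a value of 3000 means that dirty data becomes eligible for
writeout after 30 seconds.
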
==============================================================

dirty_ratio

Contains, as a percentage of total system memory, the number of pages at which
a process which is generating disk writes will itself start writing out dirty
data.

==============================================================

dirty_writeback_centisecs

The kernel flusher threads will periodically wake up and write `old' data
out to disk.  This tunable expresses the interval between those wakeups, in
hundredths of a second.

Setting this to zero disables periodic writeback altogether.

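For instance, to wake the flusher threads every 5 seconds:

        echo 500 > /proc/sys/vm/dirty_writeback_centisecs
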
==============================================================

drop_caches

Writing to this will cause the kernel to drop clean caches, dentries and
inodes from memory, causing that memory to become free.

To free pagecache:
        echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
        echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
        echo 3 > /proc/sys/vm/drop_caches

As this is a non-destructive operation and dirty objects are not freeable, the
user should run `sync' first.

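In practice the two steps are usually combined, so dirty data is flushed
to disk before the clean caches are dropped:

        sync; echo 3 > /proc/sys/vm/drop_caches
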
==============================================================

extfrag_threshold

This parameter affects whether the kernel will compact memory or direct
reclaim to satisfy a high-order allocation. /proc/extfrag_index shows what
the fragmentation index for each order is in each zone in the system. Values
tending towards 0 imply allocations would fail due to lack of memory,
values towards 1000 imply failures are due to fragmentation and -1 implies
that the allocation will succeed as long as watermarks are met.

The kernel will not compact memory in a zone if the
fragmentation index is <= extfrag_threshold. The default value is 500.

==============================================================

hugepages_treat_as_movable

This parameter is only useful when kernelcore= is specified at boot time to
create ZONE_MOVABLE for pages that may be reclaimed or migrated. Huge pages
are not movable so are not normally allocated from ZONE_MOVABLE. A non-zero
value written to hugepages_treat_as_movable allows huge pages to be allocated
from ZONE_MOVABLE.

Once enabled, ZONE_MOVABLE is treated as an area of memory the huge
pages pool can easily grow or shrink within. Assuming that applications are
not running that mlock() a lot of memory, it is likely the huge pages pool
can grow to the size of ZONE_MOVABLE by repeatedly entering the desired value
into nr_hugepages and triggering page reclaim.

==============================================================

hugetlb_shm_group

hugetlb_shm_group contains the group ID that is allowed to create SysV
shared memory segments using hugetlb pages.

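For example, to allow members of a (hypothetical) group with GID 1001 to
create hugetlb-backed shared memory segments:

        echo 1001 > /proc/sys/vm/hugetlb_shm_group
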
==============================================================

laptop_mode

laptop_mode is a knob that controls "laptop mode". All the things that are
controlled by this knob are discussed in Documentation/laptops/laptop-mode.txt.

==============================================================

legacy_va_layout

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
will use the legacy (2.4) layout for all processes.

==============================================================

lowmem_reserve_ratio

For some specialised workloads on highmem machines it is dangerous for
the kernel to allow process memory to be allocated from the "lowmem"
zone.  This is because that memory could then be pinned via the mlock()
system call, or by unavailability of swapspace.

And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.

So the Linux page allocator has a mechanism which prevents allocations
which _could_ use highmem from using too much lowmem.  This means that
a certain amount of lowmem is defended from the possibility of being
captured into pinned user memory.

(The same argument applies to the old 16 megabyte ISA DMA region.  This
mechanism will also defend that region from allocations which could use
highmem or lowmem).

The `lowmem_reserve_ratio' tunable determines how aggressive the kernel is
in defending these lower zones.

If you have a machine which uses highmem or ISA DMA and your
applications are using mlock(), or if you are running with no swap, then
you probably should change the lowmem_reserve_ratio setting.

The lowmem_reserve_ratio is an array of values, which you can see by
reading this file:
-
% cat /proc/sys/vm/lowmem_reserve_ratio
256     256     32
-
Note: the number of elements is one fewer than the number of zones, because
      the highest zone's value is not needed for the calculation below.

These values are not used directly, however. The kernel calculates the
number of protection pages for each zone from them. These are shown as an
array of protection pages in /proc/zoneinfo, as follows (this example is
from an x86-64 box). Each zone has an array of protection pages like this:

-
Node 0, zone      DMA
  pages free     1355
        min      3
        low      3
        high     4
        :
        :
    numa_other   0
        protection: (0, 2004, 2004, 2004)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  pagesets
    cpu: 0 pcp: 0
        :
-
These protections are added to the watermark to judge whether a zone should
be used for page allocation or should be reclaimed instead.

In this example, if normal pages (index=2) are requested from this DMA zone
and watermark[WMARK_HIGH] is used as the watermark, the kernel judges that
this zone should not be used because pages_free (1355) is smaller than
watermark + protection[2] (4 + 2004 = 2008). If this protection value were 0,
this zone could be used to satisfy a normal page request. For a DMA zone
request (index=0), protection[0] (=0) is used.

zone[i]'s protection[j] is calculated by the following expression:

(i < j):
  zone[i]->protection[j]
  = (total sum of present_pages from zone[i+1] to zone[j] on the node)
    / lowmem_reserve_ratio[i];
(i = j):
  (the zone need not protect itself; = 0)
(i > j):
  (not necessary, but reads as 0)

The default values of lowmem_reserve_ratio[i] are
    256 (if zone[i] means DMA or DMA32 zone)
    32  (others).
As the expression above shows, these values are reciprocals of the ratio:
256 means 1/256, so the number of protection pages becomes about 0.39% of
the total present pages of the higher zones on the node.

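Working the expression backwards for the example above: protection[2] =
2004 with lowmem_reserve_ratio[0] = 256 implies that the zones above DMA
on that node hold about 2004 * 256 = 513,024 present pages in total.
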
If you would like to protect more pages, smaller values are effective.
The minimum value is 1 (1/1 -> 100%).

==============================================================

max_map_count:

This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65536.

=============================================================

memory_failure_early_kill:

Controls how to kill processes when an uncorrected memory error (typically
a 2-bit error in a memory module) that cannot be handled by the kernel is
detected in the background by hardware. In some cases (like the page
still having a valid copy on disk) the kernel will handle the failure
transparently without affecting any applications. But if there is
no other up-to-date copy of the data, it will kill the affected processes
to prevent any data corruption from propagating.

1: Kill all processes that have the corrupted and not reloadable page mapped
as soon as the corruption is detected.  Note this is not supported
for a few types of pages, like kernel internally allocated data or
the swap cache, but works for the majority of user pages.

0: Only unmap the corrupted page from all processes and kill a process
only when it tries to access the page.

The kill is done using a catchable SIGBUS with BUS_MCEERR_AO, so processes can
handle this if they want to.

This is only active on architectures/platforms with advanced machine
check handling and depends on the hardware capabilities.

Applications can override this setting individually with the PR_MCE_KILL
prctl.

==============================================================

memory_failure_recovery

Enable memory failure recovery (when supported by the platform).

1: Attempt recovery.

0: Always panic on a memory failure.

==============================================================

min_free_kbytes:

This is used to force the Linux VM to keep a minimum number
of kilobytes free.  The VM uses this number to compute a
watermark[WMARK_MIN] value for each lowmem zone in the system.
Each lowmem zone gets a number of reserved free pages based
proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.

=============================================================

min_slab_ratio:

This is available only on NUMA kernels.

This is a percentage of the total pages in each zone.  During zone reclaim
(which occurs on fallback from the local zone), slabs will be reclaimed if
more than this percentage of pages in a zone are reclaimable slab pages.
This ensures that slab growth stays under control even in NUMA
systems that rarely perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per-zone / per-node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.

=============================================================

min_unmapped_ratio:

This is available only on NUMA kernels.

This is a percentage of the total pages in each zone. Zone reclaim will
only occur if more than this percentage of pages are in a state in which
zone_reclaim_mode allows them to be reclaimed.

If zone_reclaim_mode has the value 4 OR'd in, then the percentage is compared
against all file-backed unmapped pages, including swapcache pages and tmpfs
files. Otherwise, only unmapped pages backed by normal files, but not tmpfs
files and similar, are considered.

The default is 1 percent.

==============================================================

mmap_min_addr

This file indicates the amount of address space which a user process will
be restricted from mmapping.  Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them.  By
default this value is set to 0 and no protections will be enforced by the
security module.  Setting this value to something like 64k will allow the
vast majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.

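For example, to protect the first 64 KiB of the address space:

        echo 65536 > /proc/sys/vm/mmap_min_addr
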
==============================================================

nr_hugepages

Change the minimum size of the hugepage pool.

See Documentation/vm/hugetlbpage.txt

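For instance, to reserve 20 huge pages and confirm the pool size (the
HugePages_* counters appear in /proc/meminfo):

        echo 20 > /proc/sys/vm/nr_hugepages
        grep HugePages_Total /proc/meminfo
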
==============================================================

nr_overcommit_hugepages

Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/vm/hugetlbpage.txt

==============================================================

nr_trim_pages

This is available only on NOMMU kernels.

This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.

A value of 0 disables trimming of allocations entirely, while a value of 1
trims excess pages aggressively. Any value >= 1 acts as the watermark where
trimming of allocations is initiated.

The default value is 1.

See Documentation/nommu-mmap.txt for more information.

==============================================================

numa_zonelist_order

This sysctl is only for NUMA. Where memory is allocated from is controlled
by zonelists.
(This documentation ignores ZONE_HIGHMEM/ZONE_DMA32 for the sake of a simple
 explanation; you may read ZONE_DMA as ZONE_DMA32 where applicable.)

In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows:
ZONE_NORMAL -> ZONE_DMA
This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following two types of order.
Assume a 2-node NUMA system; below are possible zonelists for Node(0)'s
GFP_KERNEL:

(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.

Type (A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL is exhausted. This increases the possibility
of out-of-memory (OOM) in ZONE_DMA, because ZONE_DMA tends to be small.

Type (B) cannot offer the best locality but is more robust against OOM of
the DMA zone.

Type (A) is called "Node" order. Type (B) is "Zone" order.

"Node order" orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.

"Zone order" orders the zonelists by zone type, then by node within each
zone.  Specify "[Zz]one" for zone order.

Specify "[Dd]efault" to request automatic configuration.  Autoconfiguration
will select "node" order in the following cases:
(1) if the DMA zone does not exist, or
(2) if the DMA zone comprises greater than 50% of the available memory, or
(3) if any node's DMA zone comprises greater than 60% of its local memory and
    the amount of local memory is big enough.

Otherwise, "zone" order will be selected. The default order is recommended
unless it is causing problems for your system/application.

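A minimal illustration of switching the order and reading back the current
setting:

        echo zone > /proc/sys/vm/numa_zonelist_order
        cat /proc/sys/vm/numa_zonelist_order
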
==============================================================

oom_dump_tasks

Enables a system-wide task dump (excluding kernel threads) to be
produced when the kernel performs an OOM-killing and includes such
information as pid, uid, tgid, vm size, rss, nr_ptes, swapents,
oom_score_adj score, and name.  This is helpful to determine why the
OOM killer was invoked, to identify the rogue task that caused it,
and to determine why the OOM killer chose the task it did to kill.

If this is set to zero, this information is suppressed.  On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one.  Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.

The default value is 1 (enabled).

==============================================================

oom_kill_allocating_task

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill.  This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition.  This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.

==============================================================

overcommit_memory:

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/vm/overcommit-accounting and
security/commoncap.c::cap_vm_enough_memory() for more information.

==============================================================

overcommit_ratio:

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM.  See above.

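A worked example: with 2 GiB of swap, 4 GiB of RAM and the default
overcommit_ratio of 50, the commit limit is 2 GiB + 50% * 4 GiB = 4 GiB.
The current limit and the amount committed are reported as CommitLimit and
Committed_AS in /proc/meminfo.
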
==============================================================

page-cluster

page-cluster controls the number of pages up to which consecutive pages
are read in from swap in a single attempt. This is the swap counterpart
to page cache readahead.
The mentioned consecutivity is not in terms of virtual/physical addresses,
but consecutive on swap space - that means they were swapped out together.

It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
Zero disables swap readahead completely.

The default value is three (eight pages at a time).  There may be some
small benefits in tuning this to a different value if your workload is
swap-intensive.

Lower values mean lower latencies for initial faults, but also extra faults
and I/O delays for following faults that would have been part of the
consecutive pages that readahead would have brought in.

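To make the logarithm concrete: the default of 3 reads up to 2^3 = 8
pages per attempt, i.e. 32 KiB with a 4 KiB page size, while a value of 0
reads only the faulting page and disables swap readahead.
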
=============================================================

panic_on_oom

This enables or disables the panic-on-out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process via the
OOM killer.  Usually, the OOM killer can kill a rogue process and the
system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes using
mempolicy/cpusets, and those nodes reach memory exhaustion, one process
may be killed by the OOM killer and no panic occurs, because memory on
other nodes may still be free and the system as a whole may not yet be
in a fatal state.

If this is set to 2, the kernel panics unconditionally even in the cases
mentioned above.  Even if the OOM happens under a memory cgroup, the whole
system panics.

The default value is 0.
Values 1 and 2 are for failover in clustered setups; select one according
to your failover policy.
panic_on_oom=2 combined with kdump gives you a very strong tool to
investigate why the OOM happened, since a crash snapshot can be captured.

=============================================================

percpu_pagelist_fraction

This is the maximum fraction of pages in each zone (the high mark,
pcp->high) that are allocated for each per-cpu page list.  The minimum
value for this is 8, which means that we don't allow more than 1/8th of
the pages in each zone to be allocated in any single per_cpu_pagelist.
This entry only changes the value of hot per-cpu pagelists.  A user can
specify a number like 100 to allocate 1/100th of each zone to each per-cpu
page list.

The batch value of each per-cpu pagelist is also updated as a result.  It is
set to pcp->high/4.  The upper limit of batch is (PAGE_SHIFT * 8).

The initial value is zero.  The kernel does not use this value at boot time
to set the high water marks for each per-cpu page list.

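A worked example under these rules: in a zone of 100,000 pages, writing
100 to percpu_pagelist_fraction sets pcp->high to 100,000 / 100 = 1,000
pages per CPU list; the batch value would then be 1,000 / 4 = 250, capped
at PAGE_SHIFT * 8 = 96 for a 4 KiB page size (PAGE_SHIFT = 12).
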
==============================================================

stat_interval

The time interval at which VM statistics are updated.  The default
is 1 second.

==============================================================

swappiness

This control is used to define how aggressively the kernel will swap
memory pages.  Higher values will increase aggressiveness, lower values
decrease the amount of swap.

The default value is 60.

==============================================================

vfs_cache_pressure

Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.

==============================================================

zone_reclaim_mode:

zone_reclaim_mode allows one to set more or less aggressive approaches to
reclaiming memory when a zone runs out of memory. If it is set to zero then
no zone reclaim occurs. Allocations will be satisfied from other zones /
nodes in the system.

This value is a bitmask ORed together from:

1       = Zone reclaim on
2       = Zone reclaim writes dirty pages out
4       = Zone reclaim swaps pages

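For example, to enable zone reclaim together with dirty page writeout
(1 | 2 = 3):

        echo 3 > /proc/sys/vm/zone_reclaim_mode
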
zone_reclaim_mode is set during bootup to 1 if it is determined that pages
from remote zones will cause a measurable performance reduction. The
page allocator will then reclaim easily reusable pages (those page
cache pages that are currently not used) before allocating off-node pages.

It may be beneficial to switch off zone reclaim if the system is
used as a file server and all of memory should be used for caching files
from disk. In that case the caching effect is more important than
data locality.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes. Zone
reclaim will write out dirty pages if a zone fills up, effectively
throttling the process. This may decrease the performance of a single
process, since it can no longer use all of system memory to buffer its
outgoing writes, but it preserves the memory on other nodes so that the
performance of processes running on other nodes is not affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.

============ End of Document =================================
