linux/Documentation/vm/cleancache.txt
MOTIVATION

Cleancache is a new optional feature provided by the VFS layer that
potentially dramatically increases page cache effectiveness for
many workloads in many environments at a negligible cost.

Cleancache can be thought of as a page-granularity victim cache for clean
pages that the kernel's pageframe replacement algorithm (PFRA) would like
to keep around, but can't since there isn't enough memory.  So when the
PFRA "evicts" a page, it first attempts to use cleancache code to
put the data contained in that page into "transcendent memory", memory
that is not directly accessible or addressable by the kernel and is
of unknown and possibly time-varying size.

Later, when a cleancache-enabled filesystem wishes to access a page
in a file on disk, it first checks cleancache to see if it already
contains it; if it does, the page of data is copied into the kernel
and a disk access is avoided.

Transcendent memory "drivers" for cleancache are currently implemented
in Xen (using hypervisor memory) and zcache (using in-kernel compressed
memory) and other implementations are in development.

FAQs are included below.

IMPLEMENTATION OVERVIEW

A cleancache "backend" that provides transcendent memory registers itself
to the kernel's cleancache "frontend" by calling cleancache_register_ops,
passing a pointer to a cleancache_ops structure with funcs set appropriately.
Note that cleancache_register_ops returns the previous settings so that
chaining can be performed if desired.  The functions provided must conform
to certain semantics as follows:

Most important, cleancache is "ephemeral".
Pages which are copied into
cleancache have an indefinite lifetime which is completely unknowable
by the kernel and so may or may not still be in cleancache at any later time.
Thus, as its name implies, cleancache is not suitable for dirty pages.
Cleancache has complete discretion over what pages to preserve and what
pages to discard and when.

Mounting a cleancache-enabled filesystem should call "init_fs" to obtain a
pool id which, if positive, must be saved in the filesystem's superblock;
a negative return value indicates failure.  A "put_page" will copy a
(presumably about-to-be-evicted) page into cleancache and associate it with
the pool id, a file key, and a page index into the file.  (The combination
of a pool id, a file key, and an index is sometimes called a "handle".)
A "get_page" will copy the page, if found, from cleancache into kernel memory.
An "invalidate_page" will ensure the page no longer is present in cleancache;
an "invalidate_inode" will invalidate all pages associated with the specified
file; and, when a filesystem is unmounted, an "invalidate_fs" will invalidate
all pages in all files specified by the given pool id and also surrender
the pool id.

An "init_shared_fs", like init_fs, obtains a pool id but tells cleancache
to treat the pool as shared using a 128-bit UUID as a key.  On systems
that may run multiple kernels (such as hard partitioned or virtualized
systems) that may share a clustered filesystem, and where cleancache
may be shared among those kernels, calls to init_shared_fs that specify the
same UUID will receive the same pool id, thus allowing the pages to
be shared.  Note that any security requirements must be imposed outside
of the kernel (e.g. by "tools" that control cleancache).
Or a
cleancache implementation can simply disable shared_init by always
returning a negative value.

If a get_page is successful on a non-shared pool, the page is invalidated
(thus making cleancache an "exclusive" cache).  On a shared pool, the page
is NOT invalidated on a successful get_page so that it remains accessible to
other sharers.  The kernel is responsible for ensuring coherency between
cleancache (shared or not), the page cache, and the filesystem, using
cleancache invalidate operations as required.

Note that cleancache must enforce put-put-get coherency and get-get
coherency.  For the former, if two puts are made to the same handle but
with different data, say AAA by the first put and BBB by the second, a
subsequent get can never return the stale data (AAA).  For get-get coherency,
if a get for a given handle fails, subsequent gets for that handle will
never succeed unless preceded by a successful put with that handle.

Last, cleancache provides no SMP serialization guarantees; if two
different Linux threads are simultaneously putting and invalidating a page
with the same handle, the results are indeterminate.  Callers must
lock the page to ensure serial behavior.

CLEANCACHE PERFORMANCE METRICS

If properly configured, monitoring of cleancache is done via debugfs in
the /sys/kernel/debug/mm/cleancache directory.  The effectiveness of
cleancache can be measured (across all filesystems) with:

succ_gets       - number of gets that were successful
failed_gets     - number of gets that failed
puts            - number of puts attempted (all "succeed")
invalidates     - number of invalidates attempted

A backend implementation may provide additional metrics.

FAQ

1) Where's the value?
(Andrew Morton)

Cleancache provides a significant performance benefit to many workloads
in many environments with negligible overhead by improving the
effectiveness of the pagecache.  Clean pagecache pages are
saved in transcendent memory (RAM that is otherwise not directly
addressable to the kernel); fetching those pages later avoids "refaults"
and thus disk reads.

Cleancache (and its sister code "frontswap") provide interfaces for
this transcendent memory (aka "tmem"), which conceptually lies between
fast kernel-directly-addressable RAM and slower DMA/asynchronous devices.
Disallowing direct kernel or userland reads/writes to tmem
is ideal when data is transformed to a different form and size (such
as with compression) or secretly moved (as might be useful for write-
balancing for some RAM-like devices).  Evicted page-cache pages (and
swap pages) are a great use for this kind of slower-than-RAM-but-much-
faster-than-disk transcendent memory, and the cleancache (and frontswap)
"page-object-oriented" specification provides a nice way to read and
write -- and indirectly "name" -- the pages.

In the virtual case, the whole point of virtualization is to statistically
multiplex physical resources across the varying demands of multiple
virtual machines.  This is really hard to do with RAM and efforts to
do it well with no kernel change have essentially failed (except in some
well-publicized special-case workloads).  Cleancache -- and frontswap --
with a fairly small impact on the kernel, provide a huge amount
of flexibility for more dynamic, flexible RAM multiplexing.
Specifically, the Xen Transcendent Memory backend allows otherwise
"fallow" hypervisor-owned RAM to not only be "time-shared" between multiple
virtual machines, but the pages can be compressed and deduplicated to
optimize RAM utilization.
And when guest OS's are induced to surrender
underutilized RAM (e.g. with "self-ballooning"), page cache pages
are the first to go, and cleancache allows those pages to be
saved and reclaimed if overall host system memory conditions allow.

And the identical interface used for cleancache can be used in
physical systems as well.  The zcache driver acts as a memory-hungry
device that stores pages of data in a compressed state.  And
the proposed "RAMster" driver shares RAM across multiple physical
systems.

2) Why does cleancache have its sticky fingers so deep inside the
   filesystems and VFS? (Andrew Morton and Christoph Hellwig)

The core hooks for cleancache in VFS are in most cases a single line
and the minimum set are placed precisely where needed to maintain
coherency (via cleancache_invalidate operations) between cleancache,
the page cache, and disk.  All hooks compile into nothingness if
cleancache is config'ed off and turn into a function-pointer-
compare-to-NULL if config'ed on but no backend claims the ops
functions, or to a compare-struct-element-to-negative if a
backend claims the ops functions but a filesystem doesn't enable
cleancache.

Some filesystems are built entirely on top of VFS and the hooks
in VFS are sufficient, so don't require an "init_fs" hook; the
initial implementation of cleancache didn't provide this hook.
But for some filesystems (such as btrfs), the VFS hooks are
incomplete and one or more hooks in fs-specific code are required.
And for some other filesystems, such as tmpfs, cleancache may
be counterproductive.  So it seemed prudent to require a filesystem
to "opt in" to use cleancache, which requires adding a hook in
each filesystem.  Not all filesystems are supported by cleancache
only because they haven't been tested.
The existing set should
be sufficient to validate the concept, the opt-in approach means
that untested filesystems are not affected, and the hooks in the
existing filesystems should make it very easy to add more
filesystems in the future.

The total impact of the hooks to existing fs and mm files is only
about 40 lines added (not counting comments and blank lines).

3) Why not make cleancache asynchronous and batched so it can
   more easily interface with real devices with DMA instead
   of copying each individual page? (Minchan Kim)

The one-page-at-a-time copy semantics simplifies the implementation
on both the frontend and backend and also allows the backend to
do fancy things on-the-fly like page compression and
page deduplication.  And since the data is "gone" (copied into/out
of the pageframe) before the cleancache get/put call returns,
a great deal of race conditions and potential coherency issues
are avoided.  While the interface seems odd for a "real device"
or for real kernel-addressable RAM, it makes perfect sense for
transcendent memory.

4) Why is non-shared cleancache "exclusive"?  And where is the
   page "invalidated" after a "get"? (Minchan Kim)

The main reason is to free up space in transcendent memory and
to avoid unnecessary cleancache_invalidate calls.  If you want inclusive,
the page can be "put" immediately following the "get".  If
put-after-get for inclusive becomes common, the interface could
be easily extended to add a "get_no_invalidate" call.

The invalidate is done by the cleancache backend implementation.

5) What's the performance impact?

Performance analysis has been presented at OLS'09 and LCA'10.
Briefly, performance gains can be significant on most workloads,
especially when memory pressure is high (e.g.
when RAM is
overcommitted in a virtual workload); and because the hooks are
invoked primarily in place of or in addition to a disk read/write,
overhead is negligible even in worst case workloads.  Basically
cleancache replaces I/O with memory-copy-CPU-overhead; on older
single-core systems with slow memory-copy speeds, cleancache
has little value, but in newer multicore machines, especially
in a virtualized environment, it has great value.

6) How do I add cleancache support for filesystem X? (Boaz Harrash)

Filesystems that are well-behaved and conform to certain
restrictions can utilize cleancache simply by making a call to
cleancache_init_fs at mount time.  Unusual, misbehaving, or
poorly layered filesystems must either add additional hooks
and/or undergo extensive additional testing... or should just
not enable the optional cleancache.

Some points for a filesystem to consider:

- The FS should be block-device-based (e.g. a ram-based FS such
  as tmpfs should not enable cleancache).
- To ensure coherency/correctness, the FS must ensure that all
  file removal or truncation operations either go through VFS or
  add hooks to do the equivalent cleancache "invalidate" operations.
- To ensure coherency/correctness, either inode numbers must
  be unique across the lifetime of the on-disk file OR the
  FS must provide an "encode_fh" function.
- The FS must call the VFS superblock alloc and deactivate routines
  or add hooks to do the equivalent cleancache "init_fs" and
  "invalidate_fs" operations.
- To maximize performance, all pages fetched from the FS should
  go through the do_mpage_readpage routine or the FS should add
  hooks to do the equivalent (cf. btrfs).
- Currently, the FS blocksize must be the same as PAGESIZE.  This
  is not an architectural restriction, but no backends currently
  support anything different.
- A clustered FS should invoke the "shared_init_fs" cleancache
  hook to get best performance for some backends.

7) Why not use the KVA of the inode as the key? (Christoph Hellwig)

If cleancache would use the inode virtual address instead of
inode/filehandle, the pool id could be eliminated.  But, this
won't work because cleancache retains pagecache data pages
persistently even when the inode has been pruned from the
inode unused list, and only invalidates the data page if the file
gets removed/truncated.  So if cleancache used the inode kva,
there would be potential coherency issues if/when the inode
kva is reused for a different file.  Alternately, if cleancache
invalidated the pages when the inode kva was freed, much of the value
of cleancache would be lost because the cache of pages in cleancache
is potentially much larger than the kernel pagecache and is most
useful if the pages survive inode cache removal.

8) Why is a global variable required?

The cleancache_enabled flag is checked in all of the frequently-called
cleancache hooks.  The alternative is a function call to check a static
variable.  Since cleancache is enabled dynamically at runtime, systems
that don't enable cleancache would suffer thousands (possibly
tens-of-thousands) of unnecessary function calls per second.  So the
global variable allows cleancache to be enabled by default at compile
time, but have insignificant performance impact when cleancache remains
disabled at runtime.

9) Does cleancache work with KVM?

The memory model of KVM is sufficiently different that a cleancache
backend may have less value for KVM.  This remains to be tested,
especially in an overcommitted system.

10) Does cleancache work in userspace?  It sounds useful for
   memory hungry caches like web browsers.  (Jamie Lokier)

No plans yet.

Last updated: Dan Magenheimer, April 13 2011