linux/Documentation/dmaengine.txt
"v2./op/spa32. /op/form2. /opa "v2./o href="../linux+v >.6/Documenta.6.3/dmaengine.txt">"v2./opimg src="../.sta.6c/gfx/right.png" alt=">>">"vp/spa32."vpspa3 class="lxr_search">"v2."v2./opinput typ"v2./opinput typ"v2./opbutt> typSearch 2. /op/form2. p/spa32."vpspa3 class="lxr_prefs"2. /opa href="+prefs?return=Documenta.6.3/dmaengine.txt""v2./o onclick="return ajax_prefs();">"v2./oPrefs. /op/a>"vp/spa32.2./o op/div2.2./o opform ac.6.3="ajax+*" method="post" onsubmit="return false;">"vpinput typ"2./o op/form2."2./o opdiv class="headingbott>m">o o1p/a> DMA Engine API Guide o o2p/a> ==================== o o3p/a>"o o4p/a> Vinod Koul <vinod dot koul at intel.com>"o o5p/a>"o o6p/a>NOTE: For DMA Engine usage in async_tx please see:"o o7p/a> Documenta.6.3/crypto/async-tx-api.txt"o o8p/a>"o o9p/a>"o >a>Below is a guide to device driver writers on how to use the Slave-DMA API of the"o 11p/a>DMA Engine. This is applicable only for slave DMA usage only."o 12p/a>"o 13p/a>The slave DMA usage consists of following steps:"o 14p/a>1. Allocate a DMA slave channel"o 15p/a>2. Set slave and controller specific paramo 16p/a>3. Get a descriptor for transac.6.3"o 17p/a>4. Submit the transac.6.3"o 18p/a>5. Issue pending requests and wait for callback notifica.6.3"o 19p/a>"o 20p/a>1. 
1. Allocate a DMA slave channel

   Channel allocation is slightly different in the slave DMA context:
   client drivers typically need a channel from a particular DMA
   controller, and in some cases even a specific channel is desired.
   To request a channel, the dma_request_channel() API is used.

   Interface:
	struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
			dma_filter_fn filter_fn,
			void *filter_param);
   where dma_filter_fn is defined as:
	typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);

   The 'filter_fn' parameter is optional, but highly recommended for
   slave and cyclic channels as they typically need to obtain a specific
   DMA channel.

   When the optional 'filter_fn' parameter is NULL, dma_request_channel()
   simply returns the first channel that satisfies the capability mask.

   Otherwise, the 'filter_fn' routine will be called once for each free
   channel which has a capability in 'mask'. 'filter_fn' is expected to
   return 'true' when the desired DMA channel is found.

   A channel allocated via this interface is exclusive to the caller,
   until dma_release_channel() is called.

2. Set slave and controller specific parameters

   The next step is always to pass some specific information to the DMA
   driver. Most of the generic information which a slave DMA can use
   is in struct dma_slave_config. This allows the clients to specify
   DMA direction, DMA addresses, bus widths, DMA burst lengths etc.
   for the peripheral.

   If some DMA controllers have more parameters to be sent, then they
   should try to embed struct dma_slave_config in their controller
   specific structure.
   That gives flexibility to the client to pass more
   parameters, if required.

   Interface:
	int dmaengine_slave_config(struct dma_chan *chan,
				   struct dma_slave_config *config)

   Please see the dma_slave_config structure definition in dmaengine.h
   for a detailed explanation of the struct members. Please note
   that the 'direction' member will be going away as it duplicates the
   direction given in the prepare call.

3. Get a descriptor for the transaction

   For slave usage the various modes of slave transfers supported by the
   DMA-engine are:

   slave_sg	- DMA a list of scatter gather buffers from/to a peripheral
   dma_cyclic	- Perform a cyclic DMA operation from/to a peripheral till the
		  operation is explicitly stopped.
   interleaved_dma - This is common to Slave as well as M2M clients. For slave
		  usage the address of the device's FIFO may already be known
		  to the driver. Various types of operations can be expressed
		  by setting the appropriate members of
		  'dma_interleaved_template'.

   A non-NULL return of this transfer API represents a "descriptor" for
   the given transaction.

   Interface:
	struct dma_async_tx_descriptor *(*chan->device->device_prep_slave_sg)(
		struct dma_chan *chan, struct scatterlist *sgl,
		unsigned int sg_len, enum dma_data_direction direction,
		unsigned long flags);

	struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
		struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
		size_t period_len, enum dma_data_direction direction);

	struct dma_async_tx_descriptor *(*device_prep_interleaved_dma)(
		struct dma_chan *chan, struct dma_interleaved_template *xt,
		unsigned long flags);

   The peripheral driver is expected to have mapped the scatterlist for
   the DMA operation prior to calling
   device_prep_slave_sg, and must
   keep the scatterlist mapped until the DMA operation has completed.
   The scatterlist must be mapped using the DMA struct device. So, a
   normal setup should look like this:

	nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len);
	if (nr_sg == 0)
		/* error */

	desc = chan->device->device_prep_slave_sg(chan, sgl, nr_sg,
			direction, flags);

   Once a descriptor has been obtained, the callback information can be
   added and the descriptor must then be submitted. Some DMA engine
   drivers may hold a spinlock between a successful preparation and
   submission, so it is important that these two operations are closely
   paired.

   Note:
	Although the async_tx API specifies that completion callback
	routines cannot submit any new operations, this is not the
	case for slave/cyclic DMA.

	For slave DMA, the subsequent transaction may not be available
	for submission prior to the callback function being invoked, so
	slave DMA callbacks are permitted to prepare and submit a new
	transaction.

	For cyclic DMA, a callback function may wish to terminate the
	DMA via dmaengine_terminate_all().

	Therefore, it is important that DMA engine drivers drop any
	locks before calling the callback function which may cause a
	deadlock.

	Note that callbacks will always be invoked from the DMA
	engine's tasklet, never from interrupt context.
4. Submit the transaction

   Once the descriptor has been prepared and the callback information
   added, it must be placed on the DMA engine driver's pending queue.

   Interface:
	dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)

   This returns a cookie that can be used to check the progress of DMA
   engine activity via other DMA engine calls not covered in this
   document.

   dmaengine_submit() will not start the DMA operation, it merely adds
   it to the pending queue. For this, see step 5, dma_async_issue_pending.

5. Issue pending DMA requests and wait for callback notification

   The transactions in the pending queue can be activated by calling the
   issue_pending API. If the channel is idle then the first transaction
   in the queue is started and subsequent ones are queued up.

   On completion of each DMA operation, the next in queue is started and
   a tasklet triggered. The tasklet will then call the client driver's
   completion callback routine for notification, if set.

   Interface:
	void dma_async_issue_pending(struct dma_chan *chan);

Further APIs:

1. int dmaengine_terminate_all(struct dma_chan *chan)

   This causes all activity for the DMA channel to be stopped, and may
   discard data in the DMA FIFO which hasn't been fully transferred.
   No callback functions will be called for any incomplete transfers.

2. int dmaengine_pause(struct dma_chan *chan)

   This pauses activity on the DMA channel without data loss.

3. int dmaengine_resume(struct dma_chan *chan)

   Resume a previously paused DMA channel. It is invalid to resume a
   channel which is not currently paused.

4. enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
	dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)

   This can be used to check the status of the channel. Please see
   the documentation in include/linux/dmaengine.h for a more complete
   description of this API.

   This can be used in conjunction with dma_async_is_complete() and
   the cookie returned from 'descriptor->submit()' to check for
   completion of a specific DMA transaction.

   Note:
	Not all DMA engine drivers can return reliable information for
	a running DMA channel. It is recommended that DMA engine users
	pause or stop (via dmaengine_terminate_all) the channel before
	using this API.