		Dynamic DMA mapping using the generic device
		============================================

	James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API.  For a more gentle introduction
phrased in terms of the pci_ equivalents (and actual examples) see
DMA-mapping.txt.

This API is split into two pieces.  Part I describes the API and the
corresponding pci_ API.  Part II describes the extensions to the API
for supporting non-consistent memory machines.  Unless you know that
your driver absolutely has to support non-consistent platforms (this
is usually only legacy platforms) you should only use the API
described in Part I.

Part I - pci_ and dma_ Equivalent API
-------------------------------------

To get the pci_ API, you must #include <linux/pci.h>
To get the dma_ API, you must #include <linux/dma-mapping.h>


Part Ia - Using large dma-coherent buffers
------------------------------------------

void *
dma_alloc_coherent(struct device *dev, size_t size,
		   dma_addr_t *dma_handle, gfp_t flag)
void *
pci_alloc_consistent(struct pci_dev *dev, size_t size,
		     dma_addr_t *dma_handle)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.
It also returns a <dma_handle> which may be cast to an unsigned
integer the same width as the bus and used as the bus address base of
the region.

Returns: a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent only) allows the caller to
specify the GFP_ flags (see kmalloc) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).  For pci_alloc_consistent, you
must assume GFP_ATOMIC behaviour.

void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
		  dma_addr_t dma_handle)
void
pci_free_consistent(struct pci_dev *dev, size_t size, void *cpu_addr,
		    dma_addr_t dma_handle)

Free the region of consistent memory you previously allocated.  dev,
size and dma_handle must all be the same as those passed into the
consistent allocate.  cpu_addr must be the virtual address returned by
the consistent allocate.

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
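
For illustration, here is a minimal sketch of the allocate/use/free
cycle; "mydev" and RING_BYTES are hypothetical names, not part of the
API:

	dma_addr_t ring_handle;
	void *ring;

	/* RING_BYTES and mydev are made up for this example */
	ring = dma_alloc_coherent(&mydev->dev, RING_BYTES,
				  &ring_handle, GFP_KERNEL);
	if (!ring)
		return -ENOMEM;

	/* program the device with ring_handle; the CPU uses "ring" */

	/* must be called with IRQs enabled */
	dma_free_coherent(&mydev->dev, RING_BYTES, ring, ring_handle);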


Part Ib - Using small dma-coherent buffers
------------------------------------------

To get this part of the dma_ API, you must #include <linux/dmapool.h>

Many drivers need lots of small dma-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the dma-coherent
allocator, not __get_free_pages().  Also, they understand common
hardware constraints for alignment, like queue heads needing to be
aligned on N-byte boundaries.


	struct dma_pool *
	dma_pool_create(const char *name, struct device *dev,
			size_t size, size_t align, size_t alloc);

	struct pci_pool *
	pci_pool_create(const char *name, struct pci_dev *dev,
			size_t size, size_t align, size_t alloc);

The pool create() routines initialize a pool of dma-coherent buffers
for use with a given device.  They must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and
size are like what you'd pass to dma_alloc_coherent().  The device's
hardware alignment requirement for this type of data is "align" (which
is expressed in bytes, and must be a power of two).  If your device has
no boundary crossing restrictions, pass 0 for alloc; passing 4096 says
memory allocated from this pool must not cross 4KByte boundaries.


	void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
			dma_addr_t *dma_handle);

	void *pci_pool_alloc(struct pci_pool *pool, gfp_t gfp_flags,
			dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time.  Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking.  Like dma_alloc_coherent(), this returns two values: an
address usable by the cpu, and the dma address usable by the pool's
device.


	void dma_pool_free(struct dma_pool *pool, void *vaddr,
			dma_addr_t addr);

	void pci_pool_free(struct pci_pool *pool, void *vaddr,
			dma_addr_t addr);

This puts memory back into the pool.  The pool is what was passed to
the pool allocation routine; the cpu (vaddr) and dma addresses are what
were returned when that routine allocated the memory being freed.


	void dma_pool_destroy(struct dma_pool *pool);

	void pci_pool_destroy(struct pci_pool *pool);

The pool destroy() routines free the resources of the pool.  They must
be called in a context which can sleep.  Make sure you've freed all
allocated memory back to the pool before you destroy it.
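
As a minimal sketch, the whole pool life cycle might look like this
(the device pointer, name and sizes below are hypothetical):

	struct dma_pool *pool;
	dma_addr_t desc_handle;
	void *desc;

	/* 64-byte descriptors, 16-byte aligned, no boundary rule */
	pool = dma_pool_create("mydesc", dev, 64, 16, 0);
	if (!pool)
		return -ENOMEM;

	desc = dma_pool_alloc(pool, GFP_KERNEL, &desc_handle);
	if (!desc) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* hand desc_handle to the device; the CPU uses "desc" */

	dma_pool_free(pool, desc, desc_handle);
	dma_pool_destroy(pool);	/* all allocations must be freed first */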


Part Ic - DMA addressing limitations
------------------------------------

int
dma_supported(struct device *dev, u64 mask)
int
pci_dma_supported(struct pci_dev *hwdev, u64 mask)

Checks to see if the device can support DMA to the memory described by
mask.

Returns: 1 if it can and 0 if it can't.

Notes: This routine merely tests to see if the mask is possible.  It
won't change the current mask settings.  It is intended more as an
internal API for use by the platform than an external API for use by
driver writers.

int
dma_set_mask(struct device *dev, u64 mask)
int
pci_set_dma_mask(struct pci_dev *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

u64
dma_get_required_mask(struct device *dev)

After setting the mask with dma_set_mask(), this API returns the
actual mask (within that already set) that the platform actually
requires to operate efficiently.  Usually this means the returned mask
is the minimum required to cover all of memory.  Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue another dma_set_mask()
call to lower the mask again.
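
For example, here is a hedged sketch of the common pattern of trying
64-bit addressing and falling back to 32-bit (DMA_BIT_MASK() comes
from <linux/dma-mapping.h>):

	if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
		/* 64-bit failed; try the 32-bit mask instead */
		if (dma_set_mask(dev, DMA_BIT_MASK(32))) {
			dev_warn(dev, "no suitable DMA available\n");
			return -EIO;
		}
	}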


Part Id - Streaming DMA mappings
--------------------------------

dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
	       enum dma_data_direction direction)
dma_addr_t
pci_map_single(struct pci_dev *hwdev, void *cpu_addr, size_t size,
	       int direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA handle of the memory.

The direction for both APIs may be converted freely by casting.
However the dma_ API uses a strongly typed enumerator for its
direction:

DMA_NONE                = PCI_DMA_NONE          no direction (used for
                                                debugging)
DMA_TO_DEVICE           = PCI_DMA_TODEVICE      data is going from the
                                                memory to the device
DMA_FROM_DEVICE         = PCI_DMA_FROMDEVICE    data is coming from
                                                the device to the
                                                memory
DMA_BIDIRECTIONAL       = PCI_DMA_BIDIRECTIONAL direction isn't known

Notes:  Not all memory regions in a machine can be mapped by this
API.  Further, regions that appear to be physically contiguous in
kernel virtual space may not be contiguous as physical memory.  Since
this API does not provide any scatter/gather capability, it will fail
if the user tries to map a non-physically contiguous piece of memory.
For this reason, it is recommended that memory mapped by this API be
obtained only from sources which guarantee it to be physically
contiguous (like kmalloc).

Further, the physical address of the memory must be within the
dma_mask of the device (the dma_mask represents a bit mask of the
addressable region for the device; i.e., if the physical address of
the memory ANDed with the dma_mask is still equal to the physical
address, then the device can perform DMA to the memory).  In order to
ensure that the memory allocated by kmalloc is within the dma_mask,
the driver may specify various platform-dependent flags to restrict
the physical memory range of the allocation (e.g. on x86, GFP_DMA
guarantees to be within the first 16MB of available physical memory,
as required by ISA devices).

Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
maps an I/O bus address to a physical memory address).  However, to
be portable, device driver writers may *not* assume that such an
IOMMU exists.

Warnings:  Memory coherency operates at a granularity called the cache
line width.  In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line).  Since the cache line size
may not be known at compile time, the API will not enforce this
requirement.  Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device.  Once this primitive is used, memory covered by this
primitive should be treated as read-only by the device.  If the device
may write to it at any point, it should be DMA_BIDIRECTIONAL (see
below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device.  This memory should
be treated as read-only by the driver.  If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will modify it.  Thus, you
must always sync bidirectional memory twice: once before the memory is
handed off to the device (to make sure all memory changes are flushed
from the processor) and once before the data may be accessed after
being used by the device (to make sure any processor cache lines are
updated with data that the device may have changed).

void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
		 enum dma_data_direction direction)
void
pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
		 size_t size, int direction)

Unmaps the region previously mapped.  All parameters must be identical
to those passed to (and returned by) the mapping API.

dma_addr_t
dma_map_page(struct device *dev, struct page *page,
	     unsigned long offset, size_t size,
	     enum dma_data_direction direction)
dma_addr_t
pci_map_page(struct pci_dev *hwdev, struct page *page,
	     unsigned long offset, size_t size, int direction)
void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
	       enum dma_data_direction direction)
void
pci_unmap_page(struct pci_dev *hwdev, dma_addr_t dma_address,
	       size_t size, int direction)

API for mapping and unmapping pages.  All the notes and warnings for
the other mapping APIs apply here.  Also, although the <offset> and
<size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

int
pci_dma_mapping_error(struct pci_dev *hwdev, dma_addr_t dma_addr)

In some circumstances dma_map_single and dma_map_page will fail to
create a mapping.  A driver can check for these errors by testing the
returned dma address with dma_mapping_error().  A non-zero return
value means the mapping could not be created and the driver should
take appropriate action (e.g. reduce current DMA mapping usage or
delay and try again later).

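As a hedged sketch, checking for a failed single mapping looks like
this ("dev", "buf" and "len" are hypothetical):

	dma_addr_t handle;

	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle)) {
		/* reduce mapping usage, defer, or fail the request */
		return -ENOMEM;
	}

	/* ... the device reads from the buffer via "handle" ... */

	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
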
	int
	dma_map_sg(struct device *dev, struct scatterlist *sg,
		int nents, enum dma_data_direction direction)
	int
	pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
		int nents, int direction)

Maps a scatter/gather list from the block layer.

Returns: the number of physical segments mapped (this may be shorter
than <nents> passed in if the block layer determines that some
elements of the scatter/gather list are physically adjacent and thus
may be mapped with a single entry).

Please note that an sg list cannot be mapped again once it has been
mapped.  The mapping process is allowed to destroy information in the
sg list.

As with the other mapping interfaces, dma_map_sg can fail.  When it
does, 0 is returned and a driver must take appropriate action.  It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents
times) and use sg_dma_address() and sg_dma_len() macros where you
previously accessed sg->address and sg->length as shown above.

	void
	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
		int nhwentries, enum dma_data_direction direction)
	void
	pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
		int nents, int direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed into the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
physical entries returned.

void
dma_sync_single(struct device *dev, dma_addr_t dma_handle, size_t size,
		enum dma_data_direction direction)
void
pci_dma_sync_single(struct pci_dev *hwdev, dma_addr_t dma_handle,
		    size_t size, int direction)
void
dma_sync_sg(struct device *dev, struct scatterlist *sg, int nelems,
	    enum dma_data_direction direction)
void
pci_dma_sync_sg(struct pci_dev *hwdev, struct scatterlist *sg,
		int nelems, int direction)

Synchronise a single contiguous or scatter/gather mapping.  All the
parameters must be the same as those passed into the single mapping
API.

Notes:  You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().
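
As a hedged sketch of the bidirectional case described above ("dev",
"buf" and "len" are hypothetical):

	dma_addr_t handle;

	handle = dma_map_single(dev, buf, len, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* the CPU filled buf; sync before the device reads it */
	dma_sync_single(dev, handle, len, DMA_BIDIRECTIONAL);

	/* ... the device reads and rewrites the buffer ... */

	/* sync again before the CPU reads what the device wrote */
	dma_sync_single(dev, handle, len, DMA_BIDIRECTIONAL);

	dma_unmap_single(dev, handle, len, DMA_BIDIRECTIONAL);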

dma_addr_t
dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
		     enum dma_data_direction dir,
		     struct dma_attrs *attrs)

void
dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
		       size_t size, enum dma_data_direction dir,
		       struct dma_attrs *attrs)

int
dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
		 int nents, enum dma_data_direction dir,
		 struct dma_attrs *attrs)

void
dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
		   int nents, enum dma_data_direction dir,
		   struct dma_attrs *attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
struct dma_attrs*.

struct dma_attrs encapsulates a set of "dma attributes".  For the
definition of struct dma_attrs see linux/dma-attrs.h.

The interpretation of dma attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If struct dma_attrs* is NULL, the semantics of each of these
functions is identical to those of the corresponding function
without the _attrs suffix.  As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA:

#include <linux/dma-attrs.h>
/* DMA_ATTR_FOO should be defined in linux/dma-attrs.h and
 * documented in Documentation/DMA-attributes.txt */
...

	DEFINE_DMA_ATTRS(attrs);
	dma_set_attr(DMA_ATTR_FOO, &attrs);
	....
	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
	....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.:

void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
			     size_t size, enum dma_data_direction dir,
			     struct dma_attrs *attrs)
{
	....
	int foo = dma_get_attr(DMA_ATTR_FOO, attrs);
	....
	if (foo)
		/* twizzle the frobnozzle */
	....
}


Part II - Advanced dma_ usage
-----------------------------

Warning: These pieces of the DMA API have no PCI equivalent.  They
should also not be used in the majority of cases, since they cater for
unlikely corner cases that don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

void *
dma_alloc_noncoherent(struct device *dev, size_t size,
		      dma_addr_t *dma_handle, gfp_t flag)

Identical to dma_alloc_coherent() except that the platform will
choose to return either consistent or non-consistent memory as it sees
fit.  By using this API, you are guaranteeing to the platform that you
have all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning:  Handling non-consistent memory is a real pain.  You should
only ever use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

void
dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
		     dma_addr_t dma_handle)

Free memory allocated by the nonconsistent API.  All parameters must
be identical to those passed in (and returned by
dma_alloc_noncoherent()).

int
dma_is_consistent(struct device *dev, dma_addr_t dma_handle)

Returns true if the device dev is performing consistent DMA on the
memory area pointed to by the dma_handle.

int
dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

Notes: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call.  It will also always be a power
of two for easy alignment.
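
For instance, a driver can round a partial-flush length up to this
width; this sketch assumes the kernel's standard ALIGN() macro and a
hypothetical "len":

	size_t aligned_len = ALIGN(len, dma_get_cache_alignment());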

void
dma_sync_single_range(struct device *dev, dma_addr_t dma_handle,
		      unsigned long offset, size_t size,
		      enum dma_data_direction direction)

Does a partial sync, starting at offset and continuing for size.  You
must be careful to observe the cache alignment and width when doing
anything like this.  You must also be extra careful about accessing
memory you intend to sync partially.

void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
	       enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size.  Again, you *must* observe the cache line
boundaries when doing this.
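
A hedged sketch of the non-consistent pattern ("dev" and BUF_BYTES are
hypothetical; BUF_BYTES is assumed to be a multiple of the value
returned by dma_get_cache_alignment()):

	dma_addr_t handle;
	void *buf;

	buf = dma_alloc_noncoherent(dev, BUF_BYTES, &handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* ... the device writes into the buffer via "handle" ... */

	/* sync before the CPU reads data the device may have changed */
	dma_cache_sync(dev, buf, BUF_BYTES, DMA_FROM_DEVICE);

	dma_free_noncoherent(dev, BUF_BYTES, buf, handle);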

int
dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
			    dma_addr_t device_addr, size_t size,
			    int flags)

Declare a region of memory to be handed out by dma_alloc_coherent when
it's asked for coherent memory for this device.

bus_addr is the physical address to which the memory is currently
assigned in the bus responding region (this will be used by the
platform to perform the mapping).

device_addr is the address the device actually needs to be programmed
with to address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be ORed together and are:

DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writable.

DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.

One or both of these flags must be present.

DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
dma_alloc_coherent of any child devices of this one (for memory residing
on a bridge).

DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed-in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP was passed in) for success, or zero for
failure.

Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions.  If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page.  For smaller allocations,
you should use the dma_pool() API.
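
As a hedged sketch, steering dma_alloc_coherent() at 64KB of
device-local memory might look like this (DEV_LOCAL_BUS_ADDR and
DEV_LOCAL_DEV_ADDR are made-up constants for illustration):

	if (dma_declare_coherent_memory(dev, DEV_LOCAL_BUS_ADDR,
					DEV_LOCAL_DEV_ADDR, 0x10000,
					DMA_MEMORY_MAP) != DMA_MEMORY_MAP)
		return -ENOMEM;

	/* dma_alloc_coherent(dev, ...) now allocates from this region */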

void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system.  This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures.  It is the
driver's job to ensure that no parts of this memory region are
currently in use.

void *
dma_mark_declared_memory_occupied(struct device *dev,
				  dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.