                    DMA Buffer Sharing API Guide
                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

                            Sumit Semwal
                <sumit dot semwal at linaro dot org>
                 <sumit dot semwal at ti dot com>

This document serves as a guide to device-driver writers on what the dma-buf
buffer sharing API is, and how to use it for exporting and using shared
buffers.

Any device driver which wishes to be a part of DMA buffer sharing can do so
as either the 'exporter' of buffers or the 'user' of buffers.

Say a driver A wants to use buffers created by driver B; then we call B the
exporter, and A the buffer-user.

The exporter
- implements and manages operations[1] for the buffer,
- allows other users to share the buffer by using the dma_buf sharing APIs,
- manages the details of buffer allocation,
- decides about the actual backing storage where this allocation happens,
- and takes care of any migration of scatterlist - for all (shared) users of
  this buffer.

The buffer-user
- is one of (many) sharing users of the buffer,
- doesn't need to worry about how the buffer is allocated, or where, and
- needs a mechanism to get access to the scatterlist that makes up this buffer
  in memory, mapped into its own address space, so it can access the same area
  of memory.

dma-buf operations for device dma only
--------------------------------------

The dma_buf buffer sharing API usage contains the following steps:

1. Exporter announces that it wishes to export a buffer
2. Userspace gets the file descriptor associated with the exported buffer, and
   passes it around to potential buffer-users based on use case
3. Each buffer-user 'connects' itself to the buffer
4. When needed, buffer-user requests access to the buffer from exporter
5. When finished with its use, the buffer-user notifies end-of-DMA to exporter
6. When the buffer-user is done using this buffer completely, it 'disconnects'
   itself from the buffer.


1. Exporter's announcement of buffer export

   The buffer exporter announces its wish to export a buffer. In doing so, it
   provides its own private buffer data, an implementation of the operations
   that can be performed on the exported dma_buf, and flags for the file
   associated with this buffer.

   Interface:
      struct dma_buf *dma_buf_export(void *priv, struct dma_buf_ops *ops,
                                     size_t size, int flags)

   If this succeeds, dma_buf_export allocates a dma_buf structure and returns
   a pointer to it. It also associates an anonymous file with this buffer, so
   it can be exported. On failure to allocate the dma_buf object, it returns
   NULL.

2. Userspace gets a handle to pass around to potential buffer-users

   A userspace entity requests a file descriptor (fd) which is a handle to the
   anonymous file associated with the buffer. It can then share the fd with
   other drivers and/or processes.

   Interface:
      int dma_buf_fd(struct dma_buf *dmabuf, int flags)

   This API installs an fd for the anonymous file associated with this buffer;
   it returns either the fd, or an error. The flags are used when installing
   the fd (e.g. O_CLOEXEC; see the miscellaneous notes at the end of this
   document).
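
   Putting steps 1 and 2 together, the exporter side might look roughly like
   the sketch below. Everything prefixed with "my_" (the private buffer
   structure, the callbacks and the helper) is hypothetical and only meant to
   show how the calls fit together; it is not a prescribed implementation.

      #include <linux/dma-buf.h>
      #include <linux/err.h>
      #include <linux/fcntl.h>

      struct my_buffer {                /* hypothetical driver-private buffer */
              size_t size;
              /* allocation details, constraints, ... */
      };

      /* Exporter callbacks as described in steps 4-6 below; assumed to be
       * implemented elsewhere in the (hypothetical) driver. */
      struct sg_table *my_map_dma_buf(struct dma_buf_attachment *attach,
                                      enum dma_data_direction dir);
      void my_unmap_dma_buf(struct dma_buf_attachment *attach,
                            struct sg_table *sgt);
      void my_release(struct dma_buf *dmabuf);

      static struct dma_buf_ops my_dmabuf_ops = {
              .map_dma_buf   = my_map_dma_buf,
              .unmap_dma_buf = my_unmap_dma_buf,
              .release       = my_release,
      };

      /* Export 'buf' and return an fd that can be handed to userspace.
       * 'fd_flags' would typically include O_CLOEXEC, as discussed in the
       * miscellaneous notes at the end of this document. */
      static int my_export_buffer(struct my_buffer *buf, int fd_flags)
      {
              struct dma_buf *dmabuf;
              int fd;

              dmabuf = dma_buf_export(buf, &my_dmabuf_ops, buf->size, O_RDWR);
              /* defensively treat both NULL and ERR_PTR style returns as
               * failure */
              if (IS_ERR_OR_NULL(dmabuf))
                      return -ENOMEM;

              fd = dma_buf_fd(dmabuf, fd_flags);
              if (fd < 0)
                      dma_buf_put(dmabuf);  /* drop the reference on failure */

              return fd;
      }
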
3. Each buffer-user 'connects' itself to the buffer

   Each buffer-user now gets a reference to the buffer, using the fd passed to
   it.

   Interface:
      struct dma_buf *dma_buf_get(int fd)

   This API will return a reference to the dma_buf, and increment its
   refcount.

   After this, the buffer-user needs to attach its device to the buffer, which
   helps the exporter to know of device buffer constraints.

   Interface:
      struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
                                                struct device *dev)

   This API returns a reference to an attachment structure, which is then used
   for scatterlist operations. It will optionally call the 'attach' dma_buf
   operation, if provided by the exporter.

   The dma-buf sharing framework does the bookkeeping bits related to managing
   the list of all attachments to a buffer.

Up to this stage, the exporter has the option to choose not to actually
allocate the backing storage for this buffer, but to wait for the first
buffer-user to request use of the buffer before allocating it.


4. When needed, buffer-user requests access to the buffer

   Whenever a buffer-user wants to use the buffer for any DMA, it asks for
   access to the buffer using the dma_buf_map_attachment API. At least one
   attach to the buffer must have happened before map_dma_buf can be called.

   Interface:
      struct sg_table * dma_buf_map_attachment(struct dma_buf_attachment *,
                                               enum dma_data_direction);

   This is a wrapper to the dma_buf->ops->map_dma_buf operation, which hides
   the "dma_buf->ops->" indirection from the users of this interface.

   In struct dma_buf_ops, map_dma_buf is defined as
      struct sg_table * (*map_dma_buf)(struct dma_buf_attachment *,
                                       enum dma_data_direction);

   It is one of the buffer operations that must be implemented by the
   exporter. It should return the sg_table containing the scatterlist for this
   buffer, mapped into the caller's address space.

   If this is being called for the first time, the exporter can now choose to
   scan through the list of attachments for this buffer, collate the
   requirements of the attached devices, and choose an appropriate backing
   storage for the buffer.

   Based on enum dma_data_direction, it might be possible to have multiple
   users accessing at the same time (for reading, maybe), or any other kind of
   sharing that the exporter might wish to make available to buffer-users.

   The map_dma_buf() operation can return -EINTR if it is interrupted by a
   signal.


5. When finished, the buffer-user notifies end-of-DMA to exporter

   Once the DMA for the current buffer-user is over, it signals 'end-of-DMA'
   to the exporter using the dma_buf_unmap_attachment API.

   Interface:
      void dma_buf_unmap_attachment(struct dma_buf_attachment *,
                                    struct sg_table *);

   This is a wrapper to the dma_buf->ops->unmap_dma_buf() operation, which
   hides the "dma_buf->ops->" indirection from the users of this interface.

   In struct dma_buf_ops, unmap_dma_buf is defined as
      void (*unmap_dma_buf)(struct dma_buf_attachment *, struct sg_table *);

   unmap_dma_buf signifies the end-of-DMA for the attachment provided. Like
   map_dma_buf, this API must also be implemented by the exporter.


6. When the buffer-user is done using this buffer, it 'disconnects' itself
   from the buffer.

   After the buffer-user has no more interest in using this buffer, it should
   disconnect itself from the buffer:

   - it first detaches itself from the buffer.

   Interface:
      void dma_buf_detach(struct dma_buf *dmabuf,
                          struct dma_buf_attachment *dmabuf_attach);

   This API removes the attachment from the list in dmabuf, and optionally
   calls dma_buf->ops->detach(), if provided by the exporter, for any
   housekeeping bits.

   - Then, the buffer-user returns the buffer reference to the exporter.

   Interface:
      void dma_buf_put(struct dma_buf *dmabuf);

   This API then decrements the refcount for this buffer.

   If, as a result of this call, the refcount becomes 0, the 'release' file
   operation related to this fd is called. It in turn calls the
   dmabuf->ops->release() operation, and frees the memory that was allocated
   for the dmabuf when it was exported.
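
   Taken together, steps 3 to 6 on the buffer-user side might look roughly
   like the following sketch. The fd, the device pointer and the error
   handling policy are hypothetical; only the dma_buf_* calls are part of the
   API described above.

      #include <linux/dma-buf.h>
      #include <linux/err.h>

      static int my_use_buffer(int fd, struct device *dev)
      {
              struct dma_buf *dmabuf;
              struct dma_buf_attachment *attach;
              struct sg_table *sgt;
              int ret = 0;

              dmabuf = dma_buf_get(fd);              /* step 3: take a reference */
              if (IS_ERR(dmabuf))
                      return PTR_ERR(dmabuf);

              attach = dma_buf_attach(dmabuf, dev);  /* step 3: attach the device */
              if (IS_ERR(attach)) {
                      ret = PTR_ERR(attach);
                      goto out_put;
              }

              sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); /* step 4 */
              if (IS_ERR_OR_NULL(sgt)) {
                      ret = sgt ? PTR_ERR(sgt) : -ENOMEM;
                      goto out_detach;
              }

              /* ... program the device to DMA to/from the scatterlist ... */

              dma_buf_unmap_attachment(attach, sgt); /* step 5: end-of-DMA */
      out_detach:
              dma_buf_detach(dmabuf, attach);        /* step 6: detach ... */
      out_put:
              dma_buf_put(dmabuf);                   /* ... and drop the reference */
              return ret;
      }
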
NOTES:
- Importance of attach-detach and {map,unmap}_dma_buf operation pairs
   The attach-detach calls allow the exporter to figure out backing-storage
   constraints for the currently-interested devices. This allows preferential
   allocation, and/or migration of pages across different types of storage
   available, if possible (one possible approach is sketched after these
   notes).

   Bracketing of DMA access with {map,unmap}_dma_buf operations is essential
   to allow just-in-time backing of storage, and migration mid-way through a
   use-case.

- Migration of backing storage if needed
   If, after
   - at least one map_dma_buf has happened,
   - and the backing storage has been allocated for this buffer,
   another new buffer-user intends to attach itself to this buffer, the
   exporter might allow it, if possible.

   In case it is allowed by the exporter:
    if the new buffer-user has stricter 'backing-storage constraints', and the
    exporter can handle these constraints, the exporter can just stall on the
    map_dma_buf until all outstanding access is completed (as signalled by
    unmap_dma_buf).
    Once all users have finished accessing and have unmapped this buffer, the
    exporter could potentially move the buffer to the stricter
    backing-storage, and then allow further {map,unmap}_dma_buf operations
    from any buffer-user from the migrated backing-storage.

   If the exporter cannot fulfil the backing-storage constraints of the new
   buffer-user device as requested, dma_buf_attach() would return an error to
   denote non-compatibility of the new buffer-sharing request with the current
   buffer.

   If the exporter chooses not to allow an attach() operation once a
   map_dma_buf() API has been called, it simply returns an error.
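
   As an illustration of how an exporter might use the attach bookkeeping, the
   hedged sketch below records each attaching device's DMA mask so that
   backing storage can be chosen at first map_dma_buf time. The struct
   my_buffer (re-declared here from the earlier exporter sketch, now with a
   dma_mask field) and the policy of AND-ing masks are purely hypothetical;
   the attach callback prototype is the one from struct dma_buf_ops[1].

      #include <linux/dma-buf.h>
      #include <linux/dma-mapping.h>
      #include <linux/types.h>

      struct my_buffer {
              size_t size;
              /* hypothetical: intersection of all attachers' DMA masks,
               * initialised to DMA_BIT_MASK(64) at buffer creation (not
               * shown) */
              u64 dma_mask;
      };

      static int my_attach(struct dma_buf *dmabuf, struct device *dev,
                           struct dma_buf_attachment *attach)
      {
              struct my_buffer *buf = dmabuf->priv;

              /* Narrow the allowed address range to what every attached
               * device can reach; used later to pick backing storage at
               * first map_dma_buf time. */
              buf->dma_mask &= dma_get_mask(dev);

              return 0;
      }
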
Kernel cpu access to a dma-buf buffer object
--------------------------------------------

The motivations for allowing cpu access from the kernel to a dma-buf object on
the importer's side are:
- fallback operations, e.g. if the device is connected to a USB bus and the
  kernel needs to shuffle the data around first before sending it away.
- full transparency for existing users on the importer side, i.e. userspace
  should not notice the difference between a normal object from that subsystem
  and an imported one backed by a dma-buf. This is really important for DRM
  OpenGL drivers that expect to still use all the existing upload/download
  paths.

Access to a dma_buf from the kernel context involves three steps:

1. Prepare access, which invalidates any necessary caches and makes the object
   available for cpu access.
2. Access the object page-by-page with the dma_buf map apis.
3. Finish access, which will flush any necessary cpu caches and free reserved
   resources.

1. Prepare access

   Before an importer can access a dma_buf object with the cpu from the kernel
   context, it needs to notify the exporter of the access that is about to
   happen.

   Interface:
      int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
                                   size_t start, size_t len,
                                   enum dma_data_direction direction)

   This allows the exporter to ensure that the memory is actually available
   for cpu access - the exporter might need to allocate or swap-in and pin the
   backing storage. The exporter also needs to ensure that cpu access is
   coherent for the given range and access direction. The range and access
   direction can be used by the exporter to optimize the cache flushing, i.e.
   access outside of the range or with a different direction (read instead of
   write) might return stale or even bogus data (e.g. when the exporter needs
   to copy the data to temporary storage).

   This step might fail, e.g. in oom conditions.

2. Accessing the buffer

   To support dma_buf objects residing in highmem, cpu access is page-based,
   using an api similar to kmap. Accessing a dma_buf is done in aligned chunks
   of PAGE_SIZE size. Before accessing a chunk it needs to be mapped, which
   returns a pointer in kernel virtual address space. Afterwards the chunk
   needs to be unmapped again. There is no limit on how often a given chunk
   can be mapped and unmapped, i.e. the importer does not need to call
   begin_cpu_access again before mapping the same chunk again.

   Interfaces:
      void *dma_buf_kmap(struct dma_buf *, unsigned long);
      void dma_buf_kunmap(struct dma_buf *, unsigned long, void *);

   There are also atomic variants of these interfaces. As with kmap, they
   facilitate non-blocking fast-paths. Neither the importer nor the exporter
   (in the callback) is allowed to block when using these.

   Interfaces:
      void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long);
      void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *);

   For importers all the restrictions of using kmap apply, like the limited
   supply of kmap_atomic slots. Hence an importer shall only hold onto at most
   2 atomic dma_buf kmaps at the same time (in any given process context).

   dma_buf kmap calls outside of the range specified in begin_cpu_access are
   undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on
   the partial chunks at the beginning and end but may return stale or bogus
   data outside of the range (in these partial chunks).

   Note that these calls always need to succeed. The exporter needs to
   complete any preparations that might fail in begin_cpu_access.

   For some cases the overhead of kmap can be too high, so a vmap interface is
   also provided. This interface should be used very carefully, as vmalloc
   space is a limited resource on many architectures.

   Interfaces:
      void *dma_buf_vmap(struct dma_buf *dmabuf)
      void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)

   The vmap call can fail if there is no vmap support in the exporter, or if
   it runs out of vmalloc space. A fallback to kmap should be implemented.

3. Finish access

   When the importer is done accessing the range specified in
   begin_cpu_access, it needs to announce this to the exporter (to facilitate
   cache flushing and unpinning of any pinned resources). The result of any
   dma_buf kmap calls after end_cpu_access is undefined.

   Interface:
      void dma_buf_end_cpu_access(struct dma_buf *dma_buf,
                                  size_t start, size_t len,
                                  enum dma_data_direction dir);
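
   A hedged sketch of the whole kernel cpu access sequence on the importer
   side follows. The range, the direction and the byte-wise copy into a
   destination buffer are hypothetical (and assumed PAGE_SIZE aligned for
   brevity); the point is only the bracketing with begin_cpu_access /
   end_cpu_access and the page-wise kmap/kunmap.

      #include <linux/dma-buf.h>
      #include <linux/mm.h>
      #include <linux/string.h>

      /* Copy 'len' bytes starting at offset 'start' out of a dma_buf into
       * 'dst'. 'start' and 'len' are assumed to be PAGE_SIZE aligned. */
      static int my_read_dmabuf(struct dma_buf *dmabuf, size_t start,
                                size_t len, void *dst)
      {
              char *out = dst;
              unsigned long pgoff;
              int ret;

              ret = dma_buf_begin_cpu_access(dmabuf, start, len,
                                             DMA_FROM_DEVICE);
              if (ret)
                      return ret;     /* e.g. -ENOMEM under memory pressure */

              for (pgoff = start >> PAGE_SHIFT;
                   pgoff < (start + len) >> PAGE_SHIFT; pgoff++) {
                      void *vaddr = dma_buf_kmap(dmabuf, pgoff);

                      memcpy(out, vaddr, PAGE_SIZE);
                      out += PAGE_SIZE;
                      dma_buf_kunmap(dmabuf, pgoff, vaddr);
              }

              dma_buf_end_cpu_access(dmabuf, start, len, DMA_FROM_DEVICE);
              return 0;
      }
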
Direct Userspace Access/mmap Support
------------------------------------

Being able to mmap an exported dma-buf buffer object has two main use-cases:
- CPU fallback processing in a pipeline and
- supporting existing mmap interfaces in importers.

1. CPU fallback processing in a pipeline

   In many processing pipelines it is sometimes required that the cpu can
   access the data in a dma-buf (e.g. for thumbnail creation, snapshots, ...).
   To avoid the need to handle this specially in userspace frameworks for
   buffer sharing, it's ideal if the dma_buf fd itself can be used to access
   the backing storage from userspace using mmap.

   Furthermore Android's ION framework already supports this (and is otherwise
   rather similar to dma-buf from the userspace consumer side, also using fds
   as handles). So it's beneficial to support this in a similar fashion on
   dma-buf to have a good transition path for existing Android userspace.

   No special interfaces are needed; userspace simply calls mmap on the
   dma-buf fd.

2. Supporting existing mmap interfaces in importers

   Similar to the motivation for kernel cpu access, it is again important that
   the userspace code of a given importing subsystem can use the same
   interfaces with an imported dma-buf buffer object as with a native buffer
   object. This is especially important for drm, where the userspace part of
   contemporary OpenGL, X, and other drivers is huge, and reworking them to
   use a different way to mmap a buffer would be rather invasive.

   The assumption in the current dma-buf interfaces is that redirecting the
   initial mmap is all that's needed. A survey of some of the existing
   subsystems shows that no driver seems to do any nefarious thing like
   syncing up with outstanding asynchronous processing on the device or
   allocating special resources at fault time. So hopefully this is good
   enough, since adding interfaces to intercept pagefaults and allow pte
   shootdowns would increase the complexity quite a bit.

   Interface:
      int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
                       unsigned long);

   If the importing subsystem simply provides a special-purpose mmap call to
   set up a mapping in userspace, calling do_mmap with dma_buf->file will
   equally achieve that for a dma-buf object.
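
   For illustration, an importing subsystem's existing mmap file operation
   could be redirected to the dma-buf roughly as sketched below. The
   my_object structure, its import_attach field, the private_data lookup and
   the placeholder native path are all hypothetical; only dma_buf_mmap() is
   from the interface above.

      #include <linux/dma-buf.h>
      #include <linux/fs.h>
      #include <linux/mm.h>

      struct my_object {                 /* hypothetical importer-side object */
              struct dma_buf_attachment *import_attach;  /* set when imported */
              /* ... */
      };

      static int my_subsys_mmap(struct file *file, struct vm_area_struct *vma)
      {
              struct my_object *obj = file->private_data; /* hypothetical lookup */

              /*
               * If the object is backed by an imported dma-buf, redirect the
               * mapping to the exporter; pgoff 0 maps from the start of the
               * buffer.
               */
              if (obj->import_attach)
                      return dma_buf_mmap(obj->import_attach->dmabuf, vma, 0);

              /* ... otherwise the subsystem's native mmap path would run
               * here (placeholder) ... */
              return -ENODEV;
      }
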
3. Implementation notes for exporters

   Because dma-buf buffers have invariant size over their lifetime, the
   dma-buf core checks whether a vma is too large and rejects such mappings.
   The exporter hence does not need to duplicate this check.

   Because existing importing subsystems might presume coherent mappings for
   userspace, the exporter needs to set up a coherent mapping. If that's not
   possible, it needs to fake coherency by manually shooting down ptes when
   leaving the cpu domain and flushing caches at fault time. Note that all the
   dma_buf files share the same anon inode, hence the exporter needs to
   replace the dma_buf file stored in vma->vm_file with its own if pte
   shootdown is required. This is because the kernel uses the underlying
   inode's address_space for vma tracking (and hence pte tracking at shootdown
   time with unmap_mapping_range).

   If the above shootdown dance turns out to be too expensive in certain
   scenarios, we can extend dma-buf with a more explicit cache tracking scheme
   for userspace mappings. But the current assumption is that using mmap is
   always a slower path, so some inefficiencies should be acceptable.

   Exporters that shoot down mappings (for any reason) shall not do any
   synchronization at fault time with outstanding device operations.
   Synchronization is an orthogonal issue to sharing the backing storage of a
   buffer and hence should not be handled by dma-buf itself. This is
   explicitly mentioned here because many people seem to want something like
   this, but if different exporters handle this differently, buffer sharing
   can fail in interesting ways depending upon the exporter (if userspace
   starts depending upon this implicit synchronization).

Miscellaneous notes
-------------------

- Any exporters or users of the dma-buf buffer sharing framework must have
  a 'select DMA_SHARED_BUFFER' in their respective Kconfigs.

- In order to avoid fd leaks on exec, the FD_CLOEXEC flag must be set
  on the file descriptor. This is not just a resource leak, but a
  potential security hole. It could give the newly exec'd application
  access to buffers, via the leaked fd, to which it should otherwise
  not be permitted access.

  The problem with doing this via a separate fcntl() call, versus doing it
  atomically when the fd is created, is that this is inherently racy in a
  multi-threaded app[3]. The issue is made worse when it is library code
  opening/creating the file descriptor, as the application may not even be
  aware of the fds.

  To avoid this problem, userspace must have a way to request that the
  O_CLOEXEC flag be set when the dma-buf fd is created. So any API provided
  by the exporting driver to create a dmabuf fd must provide a way to let
  userspace control setting of the O_CLOEXEC flag passed in to dma_buf_fd().

- If an exporter needs to manually flush caches and hence needs to fake
  coherency for mmap support, it needs to be able to zap all the ptes pointing
  at the backing storage. The Linux mm needs a struct address_space,
  associated with the struct file stored in vma->vm_file, to do that with the
  function unmap_mapping_range. But the dma_buf framework only backs every
  dma_buf fd with the anon_file struct file, i.e. all dma_bufs share the same
  file.

  Hence exporters need to set up their own file (and address_space)
  association by setting vma->vm_file and adjusting vma->vm_pgoff in the
  dma_buf mmap callback. In the specific case of a gem driver the exporter
  could use the shmem file already provided by gem (and set vm_pgoff = 0).
  Exporters can then zap ptes by unmapping the corresponding range of the
  struct address_space associated with their own file.

References:
[1] struct dma_buf_ops in include/linux/dma-buf.h
[2] All interfaces mentioned above are defined in include/linux/dma-buf.h
[3] https://lwn.net/Articles/236486/