This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
Documentation/core-api/dma-api.rst.

From a device's point of view, DMA uses the bus address space, but it may
be restricted to a subset of that space.  For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

Here's a picture and some examples::

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

If a device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X).  The virtual
memory system maps X to a physical address (Y) in system RAM.  The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y.  But in many others, there is IOMMU hardware that translates DMA
addresses to physical addresses, e.g., it translates Z to Y.  This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z.  The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.

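As a rough sketch of that flow (my_dev_program_dma() is a made-up helper
standing in for however your hardware is told where to DMA; dev and len
come from the surrounding driver context)::

	void *x = kmalloc(len, GFP_KERNEL);	/* CPU accesses the buffer via X */
	dma_addr_t z;

	z = dma_map_single(dev, x, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, z))
		goto map_error_handling;

	my_dev_program_dma(dev, z, len);	/* device DMAs to Z; IOMMU maps Z to Y */
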
Note that the DMA API works with any bus independent of the underlying
microprocessor architecture.  You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure::

	#include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t.  This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.

Even if those classes of memory could physically work with DMA, you'd need
to ensure the I/O buffers were cacheline-aligned.  Without that, you'd see
cacheline sharing problems (data corruption) on CPUs with DMA-incoherent
caches: the CPU could write to one word, DMA could write to a different one
in the same cache line, and one of them could be overwritten.

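To make the rule concrete, here is a sketch (the names and sizes are made
up; only the kmalloc'ed buffer is safe to hand to the mapping functions)::

	static char module_buf[64];		/* module image address: not DMA-safe */

	void mydev_fill(struct device *dev)
	{
		char stack_buf[64];		/* stack address: not DMA-safe */
		void *buf = kmalloc(64, GFP_KERNEL);	/* DMA-able memory */

		/* Only buf may be passed to dma_map_single() and friends. */
		...
		kfree(buf);
	}
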
By default, the kernel assumes that your device can address 32 bits of DMA
address space.  For a 64-bit capable device, this needs to be increased, and
for a device with limitations, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions.  And at least one
platform (SGI SN2) requires 64-bit consistent allocations to operate
correctly when the IO bus is in PCI-X mode.

Here, dev is a pointer to the device struct of your device, and mask is a
bit mask describing which bits of an address your device supports.  Often the
device struct of your device is embedded in the bus-specific device struct of
your device.  For example, &pdev->dev is a pointer to the device struct of a
PCI device (pdev is a pointer to the PCI device struct of your device).

These calls usually return zero to indicate your device can perform DMA
properly on the machine given the address mask you provided, but they might
return an error if the mask is too small to be supportable on the given
system.  If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined behavior.

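For instance, a PCI driver's probe routine might do something like this
(mydev_probe() and the choice of a 64-bit mask are illustrative assumptions
about the hardware, not part of the API)::

	static int mydev_probe(struct pci_dev *pdev, const struct pci_device_id *id)
	{
		int err;

		/* &pdev->dev is the generic device embedded in the PCI device */
		err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
		if (err) {
			dev_warn(&pdev->dev, "mydev: No suitable DMA available\n");
			return err;
		}
		...
	}
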
This means that in the failure case, you have two options:

1) Use some non-DMA mode for data transfer, if possible.
2) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message
when either of these occurs.  In this manner, if a user of your driver
reports that performance is bad or that the device is not even detected,
you can ask them for the kernel messages to find out exactly why.

The 24-bit addressing device would do something like this::

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
		goto ignore_this_device;
	}

The standard 64-bit addressing device would do something like this::

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

If the device only supports 32-bit addressing for descriptors in the
coherent allocations, but supports full 64-bits for streaming mappings,
it would look like this::

	if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

Finally, if your device can only drive the low 24-bits of
address you might do something like::

	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
		goto ignore_this_device;
	}

Here is pseudo-code showing how this might be done::

	#define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
	#define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)

	struct my_sound_card *card;
	struct device *dev;

	...
	if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
		card->playback_enabled = 1;
	} else {
		card->playback_enabled = 0;
		dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
			 card->name);
	}
	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
		card->record_enabled = 1;
	} else {
		card->record_enabled = 0;
		dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
			 card->name);
	}

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  Good examples of what to use consistent mappings for are:

	- Network card DMA ring descriptors.
	- SCSI adapter mailbox command data structures.
	- Device firmware microcode executed out of
	  main memory.

  Consistent DMA memory does not preclude the use of proper memory
  barriers.  For example, if it is important for the device to see the
  first word of a descriptor updated before the second, you must do
  something like::

	desc->word0 = address;
	wmb();
	desc->word1 = DESC_VALID;

  in order to get correct behavior on all platforms.

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

	- Networking buffers transmitted/received by a device.
	- Filesystem buffers written/read by a SCSI device.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.

The consistent DMA mapping interfaces will by default return a DMA address
which is 32-bit addressable.  Even if the device indicates (via the DMA mask)
that it may address the upper 32-bits, consistent allocation will only
return > 32-bit addresses for DMA if the consistent DMA mask has been
explicitly changed via dma_set_coherent_mask().

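For reference, a typical consistent allocation and its matching free look
something like this (dev and size come from your driver context)::

	dma_addr_t dma_handle;
	void *cpu_addr;

	/* The CPU uses cpu_addr; the device is given dma_handle. */
	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		goto alloc_error_handling;	/* illustrative error label */

	...

	dma_free_coherent(dev, size, cpu_addr, dma_handle);
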
Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.

For networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier.  For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.

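A minimal sketch of that convention (the skb, rx_buf and rx_len names here
are placeholders for driver state, not part of the API)::

	dma_addr_t tx_dma, rx_dma;

	/* Transmit: the device reads the packet, so map DMA_TO_DEVICE. */
	tx_dma = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);

	/* Receive: the device writes the packet, so map DMA_FROM_DEVICE. */
	rx_dma = dma_map_single(dev, rx_buf, rx_len, DMA_FROM_DEVICE);
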
To map a single region, you do::

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	void *addr = buffer->ptr;
	size_t size = buffer->len;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle))
		goto map_error_handling;

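and to unmap it, once the DMA transfer has finished::

	dma_unmap_single(dev, dma_handle, size, direction);
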
Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single().  These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically::

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	struct page *page = buffer->page;
	unsigned long offset = buffer->offset;
	size_t size = buffer->len;

	dma_handle = dma_map_page(dev, page, offset, size, direction);
	if (dma_mapping_error(dev, dma_handle))
		goto map_error_handling;

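and to unmap it::

	dma_unmap_page(dev, dma_handle, size, direction);
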
The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use the sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

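Putting that together, a minimal sketch (hw_address[] and hw_len[] stand
in for whatever your device's descriptor format actually uses)::

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	if (count == 0)
		goto map_error_handling;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

When you are done, unmap with dma_unmap_sg(), passing the same nents you
gave to dma_map_sg(), not the count it returned.
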
If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_single() and save away the returned
DMA address::

	dma_addr_t mapping;

	mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(cp->dev, mapping))
		goto map_error_handling;

	cp->rx_buf = buffer;
	cp->rx_len = len;
	cp->rx_dma = mapping;

Then, if the interrupt handler wishes to examine the buffer before the
transfer is unmapped, it must first sync it for the CPU::

	dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
				cp->rx_len, DMA_FROM_DEVICE);

	/* Now it is safe to examine the buffer. */
	hp = (struct my_card_header *) cp->rx_buf;
	if (header_is_ok(hp)) {
		dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
				 DMA_FROM_DEVICE);
		pass_to_upper_layers(cp->rx_buf);
	} else {
		/* The CPU should not write to a
		 * DMA_FROM_DEVICE-mapped area, so
		 * dma_sync_single_for_device() is not needed here.
		 */
		give_rx_buf_to_card(cp);
	}

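If, instead, the CPU modifies a buffer that the device is going to read
(a DMA_TO_DEVICE or DMA_BIDIRECTIONAL mapping) while it stays mapped, hand
ownership back with dma_sync_single_for_device() before restarting the
DMA.  A sketch, with cp->tx_dma and cp->tx_len as made-up driver state::

	/* CPU has finished updating the packet contents... */
	dma_sync_single_for_device(cp->dev, cp->tx_dma,
				   cp->tx_len, DMA_TO_DEVICE);
	/* ...now the device may be told to read the buffer. */
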
DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error()

When a mapping error occurs in the middle of a multiple page mapping
attempt, you must also unmap the pages that are already mapped.  These
examples are applicable to dma_map_page() as well.

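A trimmed sketch of that unwinding for two single mappings (addr1/size1
and friends are placeholders)::

	dma_addr_t dma_handle1, dma_handle2;

	dma_handle1 = dma_map_single(dev, addr1, size1, direction);
	if (dma_mapping_error(dev, dma_handle1))
		goto map_error_handling;

	dma_handle2 = dma_map_single(dev, addr2, size2, direction);
	if (dma_mapping_error(dev, dma_handle2)) {
		/* undo the first mapping before bailing out */
		dma_unmap_single(dev, dma_handle1, size1, direction);
		goto map_error_handling;
	}
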
Use dma_unmap_{addr,len}_set() to set the address and length values saved
for a later unmap.  Example, before::

	ringp->mapping = FOO;
	ringp->len = BAR;

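and after::

	dma_unmap_addr_set(ringp, mapping, FOO);
	dma_unmap_len_set(ringp, len, BAR);

These helpers compile away to nothing on platforms that don't need to keep
unmap state, which is the point of using them.
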
Use dma_unmap_{addr,len}() to access these values.  Example, before::

	dma_unmap_single(dev, ringp->mapping, ringp->len,
			 DMA_FROM_DEVICE);

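and after::

	dma_unmap_single(dev, dma_unmap_addr(ringp, mapping),
			 dma_unmap_len(ringp, len),
			 DMA_FROM_DEVICE);
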
It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture
supports IOMMUs (including software IOMMU).

Architectures must ensure that kmalloc'ed buffers are
DMA-safe.  Drivers and subsystems depend on it.  If an architecture
isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
the CPU cache is identical to data in main memory),
ARCH_DMA_MINALIGN must be set so that the memory allocator
makes sure that kmalloc'ed buffers don't share a cache line with
others.  See arch/arm/include/asm/cache.h as an example.

Note that ARCH_DMA_MINALIGN is about DMA memory alignment
constraints.  You don't need to worry about the architecture data
alignment constraints (e.g. the alignment constraints about 64-bit
objects).

	David Mosberger-Tang <davidm@hpl.hp.com>