.. SPDX-License-Identifier: GPL-2.0
host. However, using the PCI endpoint framework API and DMA API, the driver is
1) The driver manages retrieval of NVMe commands in submission queues using DMA
constantly poll the doorbell of all submission queues to detect command
2) The driver transfers completion queue entries of completed commands to the
PCI address segments using DMA, if supported. If DMA is not supported, MMIO
Controller Capabilities
-----------------------
CQR bit to request "Contiguous Queues Required". This is to facilitate the
Supported Features
------------------
The maximum number of queues and the maximum data transfer size (MDTS) are
------------------------------------------------------
1) One memory window for raising MSI or MSI-X interrupts
queues that can be supported is equal to the total number of memory mapping
queues can be safely operated without any risk of getting PCI address mapping
Maximum Number of Queue Pairs
-----------------------------
and multiple I/O queues. The maximum number of I/O queue pairs that can be
1) The NVMe target core code limits the maximum number of I/O queues to the
the number of MSI-X or MSI vectors available.
3) The total number of completion queues must not exceed the total number of
Limitations and NVMe Specification Non-Compliance
-------------------------------------------------
not support multiple submission queues using the same completion queue. All
submission queues must specify a unique completion queue.
Kernel Requirements
-------------------
To facilitate testing, enabling the null-blk driver (CONFIG_BLK_DEV_NULL_BLK)
Hardware Requirements
---------------------
a40000000.pcie-ep
a40000000.pcie-ep
with RX-TX signal swapped. If the host PCI slot used does not have
plug-and-play capabilities, the host should be powered off when the NVMe PCI
NVMe Endpoint Device
--------------------
Creating a NVMe Subsystem and Port
----------------------------------
# mount -t configfs none /sys/kernel/config
# echo -n "Linux-pci-epf" > nvmepf.0.nqn/attr_model
# echo -n "/dev/nullb0" > nvmepf.0.nqn/namespaces/1/device_path
# echo -n "pci" > 1/addr_trtype
# ln -s /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn \
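With configfs mounted, the NVMe target subsystem for the endpoint can then be
created under the nvmet configfs hierarchy. A minimal sketch of this step,
assuming the subsystem NQN ``nvmepf.0.nqn`` used throughout this guide:

```shell
# mkdir /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn
# cd /sys/kernel/config/nvmet/subsystems
```

The directory name chosen here is also the NQN that the host will see.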
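In addition to the model name, hosts must also be allowed to connect to the
subsystem. A sketch using the standard nvmet ``attr_allow_any_host`` attribute,
assuming the same working directory as the command above:

```shell
# echo 1 > nvmepf.0.nqn/attr_allow_any_host
```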
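After pointing the namespace at its backing device, the namespace must be
enabled before it becomes visible to the host. A sketch using the standard
nvmet namespace ``enable`` attribute, assuming the same namespace created
above:

```shell
# echo 1 > nvmepf.0.nqn/namespaces/1/enable
```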
Creating a NVMe PCI Endpoint Device
-----------------------------------
If the PCI endpoint controller used does not support MSI-X, MSI can be
# ln -s functions/nvmet_pci_epf/nvmepf.0 controllers/a40000000.pcie-ep/
# echo 1 > controllers/a40000000.pcie-ep/start
.. code-block:: text
nvmet_pci_epf nvmet_pci_epf.0: PCI endpoint controller supports MSI-X, 32 vectors
…roller 1 for subsystem nvmepf.0.nqn for NQN nqn.2014-08.org.nvmexpress:uuid:2ab90791-2246-4fbb-961…
nvmet_pci_epf nvmet_pci_epf.0: New PCI ctrl "nvmepf.0.nqn", 4 I/O queues, mdts 524288 B
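To reconfigure or remove the endpoint device later, the setup sequence can be
reversed. A hedged sketch, assuming the same controller name as above: writing
0 to the ``start`` attribute stops the controller link, and removing the
symlink unbinds the endpoint function from the controller:

```shell
# echo 0 > controllers/a40000000.pcie-ep/start
# rm controllers/a40000000.pcie-ep/nvmepf.0
```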
PCI Root-Complex Host
---------------------
# lspci -n
# nvme id-ctrl /dev/nvme0
mn : Linux-pci-epf
fr : 6.13.0-r
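As a basic data-path sanity check from the host, a read of the first namespace
can be issued (hypothetical device name; adjust it to the namespace that
actually enumerated, which ``nvme list`` will show):

```shell
# dd if=/dev/nvme0n1 of=/dev/null bs=128k count=8
```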
subclass_code Must be 0x08 (Non-Volatile Memory controller)
interrupt_pin Interrupt PIN to use if MSI and MSI-X are not supported
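These PCI configuration attributes are exposed in the endpoint function
directory and should be set before the function is bound to the controller and
started. A sketch with placeholder vendor and device IDs (the values shown are
illustrative only):

```shell
# echo 0x1b96 > functions/nvmet_pci_epf/nvmepf.0/vendorid
# echo 0xbeef > functions/nvmet_pci_epf/nvmepf.0/deviceid
```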