
.. SPDX-License-Identifier: GPL-2.0
subsystem using a port. The port transfer type must be configured to be "pci".
segments representing the mapping of the command data buffer on the host.
-----------------------
mapping of a queue PCI address range to the local CPU address space.
Supported Features
------------------
Minimum number of PCI Address Mapping Windows Required
------------------------------------------------------
Most PCI endpoint controllers provide a limited number of mapping windows for
mapping a PCI address range to local CPU memory addresses. The NVMe PCI
endpoint target controller uses mapping windows for the following:
1) One memory window for raising MSI or MSI-X interrupts
queues that can be supported is equal to the total number of memory mapping
windows of the PCI endpoint controller minus two. For example, with an endpoint
controller providing 32 memory windows, up to 30 completion
queues can be safely operated without any risk of getting PCI address mapping
errors due to the lack of memory windows.
Maximum Number of Queue Pairs
-----------------------------
the number of MSI-X or MSI vectors available.
PCI mapping windows minus 2 (see above).
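
From the host side, the number of I/O queues actually granted by the controller
can be double-checked by reading the Number of Queues feature (Feature
Identifier 0x07) with nvme-cli. The device name used below is an assumption and
depends on how the endpoint enumerates on the host::

   # nvme get-feature /dev/nvme0 --feature-id=7 --human-readable
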
Limitations and NVMe Specification Non-Compliance
-------------------------------------------------
Kernel Requirements
-------------------

To facilitate testing, enabling the null-blk driver (CONFIG_BLK_DEV_NULL_BLK)
is also recommended.
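
For example, assuming the null_blk module is available, a test backing device
can be created as follows (nr_devices=1 is also the module default)::

   # modprobe null_blk nr_devices=1
   # ls /dev/nullb0
   /dev/nullb0
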
Hardware Requirements
---------------------
   a40000000.pcie-ep

   a40000000.pcie-ep
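
The controller name shown above is only an example from one particular board.
The endpoint controllers available on a system can be listed through the
pci_epc sysfs class, for example::

   # ls /sys/class/pci_epc/
   a40000000.pcie-ep
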
with RX-TX signal swapped. If the host PCI slot used does not have
plug-and-play capabilities, the host should be powered off when the NVMe PCI
endpoint device is configured.
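
If the host slot does support hot-plug, one common alternative is to leave the
host running and trigger a PCI bus rescan on the host once the endpoint
function has been started, using the standard sysfs knob::

   # echo 1 > /sys/bus/pci/rescan
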
--------------------
subsystem and port must be defined. Second, the NVMe PCI endpoint device must
be set up and bound to the subsystem and port created.
Creating a NVMe Subsystem and Port
----------------------------------
Details about how to configure a NVMe target subsystem and port are outside the
scope of this document. The following only provides a simple example of a port
and subsystem setup.
   # mount -t configfs none /sys/kernel/config
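
Once configfs is mounted and the NVMe target modules are loaded, the nvmet
configfs tree should be visible. A quick sanity check::

   # ls /sys/kernel/config/nvmet/
   hosts  ports  subsystems
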
Now, create a subsystem and a port that we will use to create a PCI target
controller. In this example, the port is created with a maximum of 4 I/O queue
pairs::

   # cd /sys/kernel/config/nvmet/subsystems
   # mkdir nvmepf.0.nqn
   # echo -n "Linux-pci-epf" > nvmepf.0.nqn/attr_model
   # mkdir nvmepf.0.nqn/namespaces/1
   # echo -n "/dev/nullb0" > nvmepf.0.nqn/namespaces/1/device_path
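
After setting the backing device path, the namespace also needs to be enabled.
A minimal sketch, assuming the same subsystem and namespace directories as
above::

   # echo 1 > nvmepf.0.nqn/namespaces/1/enable
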
Finally, create the target port and link it to the subsystem::

   # cd /sys/kernel/config/nvmet/ports
   # mkdir 1
   # echo -n "pci" > 1/addr_trtype
   # ln -s /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn \
        /sys/kernel/config/nvmet/ports/1/subsystems/nvmepf.0.nqn
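
The port-to-subsystem link can be verified by listing the port's subsystems
directory, which should now contain the subsystem NQN::

   # ls /sys/kernel/config/nvmet/ports/1/subsystems/
   nvmepf.0.nqn
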
Creating a NVMe PCI Endpoint Device
-----------------------------------
With the NVMe target subsystem and port ready for use, the NVMe PCI endpoint
function can now be set up. The nvmet_pci_epf kernel module should already be
loaded (that is done automatically when the port is created).
If the PCI endpoint controller used does not support MSI-X, MSI can be used
instead.
Next, let's bind our endpoint device to the target subsystem and port that we
created, then attach it to the PCI endpoint controller and start it::

   # cd /sys/kernel/config/pci_ep
   # ln -s functions/nvmet_pci_epf/nvmepf.0 controllers/a40000000.pcie-ep/
   # echo 1 > controllers/a40000000.pcie-ep/start
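
Whether the endpoint function started correctly can be checked from the
endpoint kernel log, for example by filtering on the driver name (the messages
shown below are only an excerpt)::

   # dmesg | grep nvmet_pci_epf
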
.. code-block:: text

   nvmet_pci_epf nvmet_pci_epf.0: PCI endpoint controller supports MSI-X, 32 vectors
   …roller 1 for subsystem nvmepf.0.nqn for NQN nqn.2014-08.org.nvmexpress:uuid:2ab90791-2246-4fbb-961…
PCI Root-Complex Host
---------------------
   # lspci -n
   # nvme id-ctrl /dev/nvme0
   mn : Linux-pci-epf
   fr : 6.13.0-r
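
Basic I/O can then be exercised from the host to confirm that the namespace is
usable. The namespace block device name below is an assumption and depends on
how the host enumerated the controller::

   # nvme list
   # dd if=/dev/nvme0n1 of=/dev/null bs=4096 count=1024
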
subclass_code    Must be 0x08 (Non-Volatile Memory controller)
interrupt_pin    Interrupt PIN to use if MSI and MSI-X are not supported
portid           The ID of the target port to use
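
For illustration, these attributes are plain configfs files under the endpoint
function directory and can be set with echo. The exact layout, in particular
the nvme/ subdirectory assumed here to hold portid, should be checked against
the configfs tree on the running system::

   # cd /sys/kernel/config/pci_ep/functions/nvmet_pci_epf/nvmepf.0
   # echo 0x08 > subclass_code
   # echo 1 > nvme/portid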