1 /* SPDX-License-Identifier: MIT */
16 * bind engine, and return a handle to the user.
19 * ------------
33 * ----------
35 * DRM_XE_VM_BIND_OP_MAP - Create mapping for a BO
36 * DRM_XE_VM_BIND_OP_UNMAP - Destroy mapping for a BO / userptr
37 * DRM_XE_VM_BIND_OP_MAP_USERPTR - Create mapping for userptr
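 *
 * As a purely illustrative sketch, a single MAP operation could be issued
 * from userspace as below. This assumes the upstream xe_drm.h uAPI names
 * (struct drm_xe_vm_bind, struct drm_xe_vm_bind_op, DRM_IOCTL_XE_VM_BIND,
 * exec_queue_id), which postdate the engine terminology used in this
 * document, so treat the exact fields as an assumption rather than a
 * contract.
 *
 * .. code-block:: c
 *
 *     #include <string.h>
 *     #include <sys/ioctl.h>
 *     #include <drm/xe_drm.h>
 *
 *     static int xe_bind_bo(int fd, __u32 vm_id, __u32 bo_handle,
 *                           __u64 gpu_addr, __u64 size)
 *     {
 *             struct drm_xe_vm_bind bind;
 *
 *             memset(&bind, 0, sizeof(bind));
 *             bind.vm_id = vm_id;
 *             bind.exec_queue_id = 0;          // default bind engine/queue
 *             bind.num_binds = 1;
 *             bind.bind.obj = bo_handle;       // GEM handle to map
 *             bind.bind.obj_offset = 0;
 *             bind.bind.range = size;
 *             bind.bind.addr = gpu_addr;       // GPU virtual address
 *             bind.bind.op = DRM_XE_VM_BIND_OP_MAP;
 *             bind.num_syncs = 0;              // no in/out fences in this sketch
 *
 *             // real code would use drmIoctl() to restart on EINTR
 *             return ioctl(fd, DRM_IOCTL_XE_VM_BIND, &bind);
 *     }
 *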
54 * .. code-block::
56 * bind BO0 0x0-0x1000
62 * bind BO1 0x201000-0x202000
66 * bind BO2 0x1ff000-0x201000
74 * bind can be done immediately (all in-fences satisfied, VM dma-resv kernel
78 * -------------
83 * ----------
95 * ------------------------
105 * -------------------------
122 * ---------------------
130 * In the bind IOCTL the user can optionally pass in an engine ID which must map
131 * to an engine which is of the special class DRM_XE_ENGINE_CLASS_VM_BIND.
132 * Underneath this is really a virtual engine that can run on any of the copy
134 * engine's ring. In the example above, if A and B have different bind engines, B
135 * is free to pass A. If the engine ID field is omitted, the default bind queue
141 * ------------------------
148 * engine makes this possible.
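 *
 * For illustration, a bind engine could be created from userspace roughly
 * as below. This assumes the upstream uAPI names (struct
 * drm_xe_exec_queue_create, DRM_IOCTL_XE_EXEC_QUEUE_CREATE,
 * DRM_XE_ENGINE_CLASS_VM_BIND), which postdate this document; in the
 * terminology used here, the returned id is the engine ID passed to the
 * bind IOCTL.
 *
 * .. code-block:: c
 *
 *     #include <stdint.h>
 *     #include <string.h>
 *     #include <sys/ioctl.h>
 *     #include <drm/xe_drm.h>
 *
 *     static int xe_create_bind_engine(int fd, __u32 vm_id, __u32 *queue_id)
 *     {
 *             struct drm_xe_engine_class_instance inst;
 *             struct drm_xe_exec_queue_create create;
 *             int err;
 *
 *             memset(&inst, 0, sizeof(inst));
 *             inst.engine_class = DRM_XE_ENGINE_CLASS_VM_BIND;
 *
 *             memset(&create, 0, sizeof(create));
 *             create.vm_id = vm_id;
 *             create.width = 1;
 *             create.num_placements = 1;
 *             create.instances = (__u64)(uintptr_t)&inst;
 *
 *             err = ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE, &create);
 *             if (err)
 *                     return err;
 *
 *             *queue_id = create.exec_queue_id;
 *             return 0;
 *     }
 *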
151 * ----------------------------
155 * .. code-block::
157 * 0x0000-0x2000 and 0x3000-0x5000 have mappings
158 * Munmap 0x1000-0x4000, results in mappings 0x0000-0x1000 and 0x4000-0x5000
163 * .. code-block::
165 * unbind 0x0000-0x2000
166 * unbind 0x3000-0x5000
167 * rebind 0x0000-0x1000
168 * rebind 0x4000-0x5000
170 * Why not just do a partial unbind of 0x1000-0x2000 and 0x3000-0x4000? This
178 * In this example there is a window of time where 0x0000-0x1000 and
179 * 0x4000-0x5000 are invalid but the user didn't ask for these addresses to be
186 * VM). The caveat is all dma-resv slots must be updated atomically with respect
188 * vm->lock in write mode from the first operation until the last.
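 *
 * For illustration only, a hypothetical helper below shows the range math
 * for a single mapping overlapping the munmap range (matching the example
 * above); the real driver works on its page-table / VMA structures rather
 * than bare ranges.
 *
 * .. code-block:: c
 *
 *     #include <linux/types.h>
 *
 *     struct addr_range { __u64 start, end; };  // [start, end), hypothetical
 *
 *     // The whole overlapping mapping is unbound; the pieces of it outside
 *     // the munmap range are rebound. Returns the number of rebind pieces.
 *     static int rebind_pieces(struct addr_range map, struct addr_range unmap,
 *                              struct addr_range out[2])
 *     {
 *             int n = 0;
 *
 *             if (map.start < unmap.start)  // piece below the munmap
 *                     out[n++] = (struct addr_range){ map.start, unmap.start };
 *             if (unmap.end < map.end)      // piece above the munmap
 *                     out[n++] = (struct addr_range){ unmap.end, map.end };
 *
 *             return n;
 *     }
 *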
191 * ----------------------------
208 * ------------
214 * idle to ensure no faults. This is done by waiting on all of VM's dma-resv slots.
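 *
 * A minimal kernel-side sketch of such a wait, assuming only the generic
 * dma-resv API and taking the VM's reservation object as a plain
 * struct dma_resv pointer rather than any driver-specific accessor:
 *
 * .. code-block:: c
 *
 *     #include <linux/dma-resv.h>
 *     #include <linux/sched.h>
 *
 *     // Wait for every fence in the VM's reservation object; the BOOKKEEP
 *     // usage level includes all lower levels (KERNEL, WRITE, READ), so
 *     // this waits on all slots.
 *     static long wait_vm_idle(struct dma_resv *vm_resv)
 *     {
 *             return dma_resv_wait_timeout(vm_resv, DMA_RESV_USAGE_BOOKKEEP,
 *                                          false, MAX_SCHEDULE_TIMEOUT);
 *     }
 *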
217 * -------
219 * Either the next exec (non-compute) or rebind worker (compute mode) will
221 * after the VM dma-resv wait if the VM is in compute mode.
230 * time a dma fence is allowed to exist for before signaling; as such, dma fences
235 * --------------
239 * running on an engine, that batch can fault or cause a memory corruption as
242 * preempt fence it tells the submission backend to kick that engine off the
243 * hardware and the preempt fence signals when the engine is off the hardware.
247 * A preempt fence, for every engine using the VM, is installed into the VM's
248 * dma-resv DMA_RESV_USAGE_PREEMPT_FENCE slot. The same preempt fence, for every
249 * engine using the VM, is also installed into the same dma-resv slot of every
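 *
 * A minimal sketch of installing one such fence into a reservation object,
 * assuming only the generic dma-resv API; the usage value is left as a
 * parameter since the exact name of the preempt fence slot has varied
 * between revisions:
 *
 * .. code-block:: c
 *
 *     #include <linux/dma-fence.h>
 *     #include <linux/dma-resv.h>
 *
 *     // Caller holds the reservation object's lock, typically for the VM
 *     // resv and every external BO resv in one ww transaction.
 *     static int install_preempt_fence(struct dma_resv *resv,
 *                                      struct dma_fence *pfence,
 *                                      enum dma_resv_usage usage)
 *     {
 *             int err;
 *
 *             dma_resv_assert_held(resv);
 *
 *             // Make room for one more fence, then add it to the slot
 *             // described above.
 *             err = dma_resv_reserve_fences(resv, 1);
 *             if (err)
 *                     return err;
 *
 *             dma_resv_add_fence(resv, pfence, usage);
 *             return 0;
 *     }
 *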
253 * -------------
262 * .. code-block::
264 * <----------------------------------------------------------------------|
268 * Lock VM dma-resv and external BOs dma-resv |
270 * Wait on and allocate new preempt fences for every engine using the VM |
273 * Wait VM's DMA_RESV_USAGE_KERNEL dma-resv slot |
274 * Install preempt fences and issue resume for every engine using the VM |
278 * Wait all VM's dma-resv slots |
279 * Retry ----------------------------------------------------------
284 * -----------
286 * In order to prevent an engine from continuously being kicked off the hardware
287 * and making no forward progress, an engine has a period of time it is allowed to
289 * each engine a timeslice.
297 * When a VM is created, a default bind engine and PT table structure are created
305 * various places plus exporting a composite fence for multi-GT binds to the
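 *
 * The composite fence mentioned above can be illustrated with the generic
 * dma_fence_array API. This sketch assumes the per-GT bind fences were
 * already collected into a kmalloc'd array; on success the fence array
 * takes ownership of that array and of the fence references:
 *
 * .. code-block:: c
 *
 *     #include <linux/dma-fence.h>
 *     #include <linux/dma-fence-array.h>
 *
 *     // Combine per-GT bind fences into one fence which signals only when
 *     // every GT's bind has completed.
 *     static struct dma_fence *bind_composite_fence(struct dma_fence **fences,
 *                                                   unsigned int num)
 *     {
 *             struct dma_fence_array *array;
 *
 *             array = dma_fence_array_create(num, fences,
 *                                            dma_fence_context_alloc(1), 1,
 *                                            false);
 *             if (!array)
 *                     return NULL;
 *
 *             return &array->base;
 *     }
 *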
312 * page faults are enabled, using dma fences can potentially induce a deadlock:
313 * A pending page fault can hold up the GPU work which holds up the dma fence
315 * fault, but memory allocation is not allowed to gate dma fence signaling. As
316 * such, dma fences are not allowed when VM is in fault mode. Because dma-fences
321 * ----------------
329 * ------------------
332 * path of dma fences (no memory allocations are allowed, faults require memory
341 * the GT (1 per hardware engine) and kick a worker to process the faults. Since
354 * .. code-block::
361 * <----------------------------------------------------------------------|
363 * Lock VM & BO dma-resv locks |
368 * Drop VM & BO dma-resv locks |
369 * Retry ----------------------------------------------------------
375 * ---------------
389 * .. code-block::
395 * Lock VM & BO dma-resv locks
403 * -------------------------------------------------
425 * -----
427 * VM global lock (vm->lock) - rw semaphore lock. Outermost lock which protects
434 * VM dma-resv lock (vm->ttm.base.resv->lock) - WW lock. Protects VM dma-resv
439 * external BO dma-resv lock (bo->ttm.base.resv->lock) - WW lock. Protects
440 * external BO dma-resv slots. Expected to be acquired during VM binds (in
441 * addition to the VM dma-resv lock). All external BO dma-resv locks within a VM are
442 * expected to be acquired (in addition to the VM dma-resv lock) during execs
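 *
 * An illustrative sketch of the resulting lock ordering for an exec, using
 * the generic dma-resv / ww-mutex API with a single external BO and
 * placeholder names; real code tracks all of the VM's external BOs and
 * keeps the ww_acquire_ctx for the duration of the operation:
 *
 * .. code-block:: c
 *
 *     #include <linux/dma-resv.h>
 *     #include <linux/rwsem.h>
 *
 *     static int lock_for_exec(struct rw_semaphore *vm_lock,
 *                              struct dma_resv *vm_resv,
 *                              struct dma_resv *bo_resv,
 *                              struct ww_acquire_ctx *ctx)
 *     {
 *             struct dma_resv *first = vm_resv, *second = bo_resv;
 *             int err;
 *
 *             down_read(vm_lock);  // outermost lock, read or write as required
 *             ww_acquire_init(ctx, &reservation_ww_class);
 *
 *             err = dma_resv_lock(first, ctx);
 *             if (err)  // nothing else held yet, so no -EDEADLK backoff needed
 *                     goto out_err;
 *
 *             while ((err = dma_resv_lock(second, ctx)) == -EDEADLK) {
 *                     struct dma_resv *tmp = first;
 *
 *                     // Contention: drop what we hold, take the contended
 *                     // lock first via the slow path, then retry.
 *                     dma_resv_unlock(first);
 *                     first = second;
 *                     second = tmp;
 *                     dma_resv_lock_slow(first, ctx);
 *             }
 *             if (err) {
 *                     dma_resv_unlock(first);
 *                     goto out_err;
 *             }
 *
 *             ww_acquire_done(ctx);
 *             return 0;
 *
 *     out_err:
 *             ww_acquire_fini(ctx);
 *             up_read(vm_lock);
 *             return err;
 *     }
 *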
447 * -----------------------
450 * time (vm->lock).
453 * executing at the same time (vm->lock).
456 * the same VM is executing (vm->lock).
459 * compute mode rebind worker with the same VM is executing (vm->lock).
462 * executing (dma-resv locks).
465 * with the same VM is executing (dma-resv locks).
467 * dma-resv usage
473 * external BOs dma-resv slots. Let's try to make this as clear as possible.
476 * -----------------
482 * 2. In non-compute mode, jobs from execs install themselves into the
485 * 3. In non-compute mode, jobs from execs install themselves into the
494 * 6. Every engine using a compute mode VM has a preempt fence installed into
497 * 7. Every engine using a compute mode VM has a preempt fence installed into
501 * ------------
507 * 2. In non-compute mode, the execution of all jobs from rebinds in execs shall
511 * 3. In non-compute mode, the execution of all jobs from execs shall wait on the
524 * -----------------------
527 * non-compute mode execs
529 * 2. New jobs from non-compute mode execs are blocked behind any existing jobs
535 * 4. Compute mode engine resumes are blocked behind any existing jobs from
547 * wait on the dma-resv kernel slots of VM or BO, technically we only have to