1 // Copyright (c) 2016 The vulkano developers
2 // Licensed under the Apache License, Version 2.0
3 // <LICENSE-APACHE or
4 // https://www.apache.org/licenses/LICENSE-2.0> or the MIT
5 // license <LICENSE-MIT or https://opensource.org/licenses/MIT>,
6 // at your option. All files in the project carrying such
7 // notice may not be copied, modified, or distributed except
8 // according to those terms.
9
10 //! In Vulkan, suballocation of [`DeviceMemory`] is left to the application, because every
11 //! application has slightly different needs and one can not incorporate an allocator into the
12 //! driver that would perform well in all cases. Vulkano stays true to this sentiment, but aims to
13 //! reduce the burden on the user as much as possible. You have a toolbox of configurable
14 //! [suballocators] to choose from that cover all allocation algorithms, which you can compose into
15 //! any kind of [hierarchy] you wish. This way you have maximum flexibility while still only using
16 //! a few `DeviceMemory` blocks and not writing any of the very error-prone code.
17 //!
18 //! If you just want to allocate memory and don't have any special needs, look no further than the
19 //! [`StandardMemoryAllocator`].
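//!
//! For example, a minimal sketch of creating one (assuming a `device: Arc<Device>` has already
//! been created; device setup is out of scope here):
//!
//! ```ignore
//! use vulkano::memory::allocator::StandardMemoryAllocator;
//!
//! // Typically created once and shared across the application.
//! let memory_allocator = StandardMemoryAllocator::new_default(device.clone());
//! ```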
20 //!
21 //! # Why not just allocate `DeviceMemory`?
22 //!
23 //! But the driver has an allocator! Otherwise you wouldn't be able to allocate `DeviceMemory`,
//! right? Indeed, but that allocation is very expensive. Not only that, drivers also impose a
//! pretty low limit on the number of allocations. See, everything in Vulkan tries to keep you
//! away from allocating `DeviceMemory` too often. These limits let the implementation optimize
//! on its end, while the application optimizes on the other end.
28 //!
29 //! # Alignment
30 //!
31 //! At the end of the day, memory needs to be backed by hardware somehow. A *memory cell* stores a
32 //! single *bit*, bits are grouped into *bytes* and bytes are grouped into *words*. Intuitively, it
33 //! should make sense that accessing single bits at a time would be very inefficient. That is why
//! computers always access at least a whole word of memory at once. That means that if you tried
35 //! to do an unaligned access, you would need to access twice the number of memory locations.
36 //!
37 //! Example aligned access, performing bitwise NOT on the (64-bit) word at offset 0x08:
38 //!
39 //! ```plain
//!     | 08                      | 10                      | 18
//! ----+-------------------------+-------------------------+----
//! ••• | 35 35 35 35 35 35 35 35 | 01 23 45 67 89 ab cd ef | •••
//! ----+-------------------------+-------------------------+----
//!     ,            |            ,
//!     +------------|------------+
//!     '            v            '
//! ----+-------------------------+-------------------------+----
//! ••• | ca ca ca ca ca ca ca ca | 01 23 45 67 89 ab cd ef | •••
//! ----+-------------------------+-------------------------+----
50 //! ```
51 //!
52 //! Same example as above, but this time unaligned with a word at offset 0x0a:
53 //!
54 //! ```plain
//!     | 08    0a                | 10                      | 18
//! ----+-------------------------+-------------------------+----
//! ••• | cd ef 35 35 35 35 35 35 | 35 35 01 23 45 67 89 ab | •••
//! ----+-------------------------+-------------------------+----
//!            ,            |            ,
//!            +------------|------------+
//!            '            v            '
//! ----+-------------------------+-------------------------+----
//! ••• | cd ef ca ca ca ca ca ca | ca ca 01 23 45 67 89 ab | •••
//! ----+-------------------------+-------------------------+----
65 //! ```
66 //!
67 //! As you can see, in the unaligned case the hardware would need to read both the word at offset
68 //! 0x08 and the word at the offset 0x10 and then shift the bits from one register into the other.
//! Safe to say it should be avoided, and this is why we need alignment. This example also goes
70 //! to show how inefficient unaligned writes are. Say you pieced together your word as described,
71 //! and now you want to perform the bitwise NOT and write the result back. Difficult, isn't it?
72 //! That's due to the fact that even though the chunks occupy different ranges in memory, they are
73 //! still said to *alias* each other, because if you try to write to one memory location, you would
74 //! be overwriting 2 or more different chunks of data.
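//!
//! To make the arithmetic concrete, this is the usual power-of-two rounding that suballocators
//! perform when placing an allocation (plain Rust for illustration, not a vulkano API):
//!
//! ```
//! fn align_up(offset: u64, alignment: u64) -> u64 {
//!     debug_assert!(alignment.is_power_of_two());
//!     (offset + alignment - 1) & !(alignment - 1)
//! }
//!
//! // The unaligned offset 0x0a from the example above gets rounded up to the next word.
//! assert_eq!(align_up(0x0a, 8), 0x10);
//! assert_eq!(align_up(0x10, 8), 0x10);
//! ```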
75 //!
76 //! ## Pages
77 //!
78 //! It doesn't stop at the word, though. Words are further grouped into *pages*. These are
79 //! typically power-of-two multiples of the word size, much like words are typically powers of two
80 //! themselves. You can easily extend the concepts from the previous examples to pages if you think
81 //! of the examples as having a page size of 1 word. Two resources are said to alias if they share
82 //! a page, and therefore should be aligned to the page size. What the page size is depends on the
83 //! context, and a computer might have multiple different ones for different parts of hardware.
84 //!
85 //! ## Memory requirements
86 //!
87 //! A Vulkan device might have any number of reasons it would want certain alignments for certain
88 //! resources. For example, the device might have different caches for different types of
89 //! resources, which have different page sizes. Maybe the device wants to store images in some
//! other cache than buffers, which needs a different alignment. Or maybe images of different
91 //! layouts require different alignment, or buffers with different usage/mapping do. The specifics
92 //! don't matter in the end, this just goes to illustrate the point. This is why memory
93 //! requirements in Vulkan vary not only with the Vulkan implementation, but also with the type of
94 //! resource.
95 //!
96 //! ## Buffer-image granularity
97 //!
98 //! This unfortunately named granularity is the page size which a linear resource neighboring a
99 //! non-linear resource must be aligned to in order for them not to alias. The difference between
100 //! the memory requirements of the individual resources and the [buffer-image granularity] is that
101 //! the memory requirements only apply to the resource they are for, while the buffer-image
102 //! granularity applies to two neighboring resources. For example, you might create two buffers,
103 //! which might have two different memory requirements, but as long as those are satisfied, you can
104 //! put these buffers cheek to cheek. On the other hand, if one of them is an (optimal layout)
105 //! image, then they must not share any page, whose size is given by this granularity. The Vulkan
106 //! implementation can use this for additional optimizations if it needs to, or report a
107 //! granularity of 1.
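//!
//! If you are curious what your device reports, a quick sketch (assuming you already have a
//! `physical_device` at hand):
//!
//! ```ignore
//! let granularity = physical_device.properties().buffer_image_granularity;
//! println!("buffer-image granularity: {:?}", granularity);
//! ```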
108 //!
109 //! # Fragmentation
110 //!
111 //! Memory fragmentation refers to the wastage of memory that results from alignment requirements
112 //! and/or dynamic memory allocation. As such, some level of fragmentation is always going to be
113 //! inevitable. Different allocation algorithms each have their own characteristics and trade-offs
114 //! in relation to fragmentation.
115 //!
116 //! ## Internal Fragmentation
117 //!
118 //! This type of fragmentation arises from alignment requirements. These might be imposed by the
119 //! Vulkan implementation or the application itself.
120 //!
121 //! Say for example your allocations need to be aligned to 64B, then any allocation whose size is
122 //! not a multiple of the alignment will need padding at the end:
123 //!
124 //! ```plain
//!     | 0x040            | 0x080            | 0x0c0            | 0x100
//! ----+------------------+------------------+------------------+--------
//!     | ############     | ################ | ########         | #######
//! ••• | ### 48 B ###     | ##### 64 B ##### | # 32 B #         | ### •••
//!     | ############     | ################ | ########         | #######
//! ----+------------------+------------------+------------------+--------
131 //! ```
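//!
//! In numbers, using the `align_up` helper sketched earlier: the 48 B allocation in a 64 B-aligned
//! slot wastes `align_up(48, 64) - 48 = 16` bytes, and the 32 B one wastes 32 bytes.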
132 //!
133 //! If this alignment is imposed by the Vulkan implementation, then there's nothing one can do
134 //! about this. Simply put, that space is unusable. One also shouldn't want to do anything about
135 //! it, since these requirements have very good reasons, as described in further detail in previous
136 //! sections. They prevent resources from aliasing so that performance is optimal.
137 //!
138 //! It might seem strange that the application would want to cause internal fragmentation itself,
139 //! but this is often a good trade-off to reduce or even completely eliminate external
140 //! fragmentation. Internal fragmentation is very predictable, which makes it easier to deal with.
141 //!
142 //! ## External fragmentation
143 //!
144 //! With external fragmentation, what happens is that while the allocations might be using their
145 //! own memory totally efficiently, the way they are arranged in relation to each other would
//! prevent a new contiguous chunk of memory from being allocated even though there is enough free
//! space
147 //! left. That is why this fragmentation is said to be external to the allocations. Also, the
148 //! allocations together with the fragments in-between add overhead both in terms of space and time
149 //! to the allocator, because it needs to keep track of more things overall.
150 //!
151 //! As an example, take these 4 allocations within some block, with the rest of the block assumed
152 //! to be full:
153 //!
154 //! ```plain
//! +-----+-------------------+-------+-----------+-- - - --+
//! |     |                   |       |           |         |
//! |  A  |         B         |   C   |     D     |   •••   |
//! |     |                   |       |           |         |
//! +-----+-------------------+-------+-----------+-- - - --+
160 //! ```
161 //!
162 //! The allocations were all done in order, and naturally there is no fragmentation at this point.
163 //! Now if we free B and D, since these are done out of order, we will be left with holes between
164 //! the other allocations, and we won't be able to fit allocation E anywhere:
165 //!
166 //! ```plain
//! +-----+-------------------+-------+-----------+-- - - --+     +-------------------------+
//! |     |                   |       |           |         |  ?  |                         |
//! |  A  |                   |   C   |           |   •••   | <== |            E            |
//! |     |                   |       |           |         |     |                         |
//! +-----+-------------------+-------+-----------+-- - - --+     +-------------------------+
172 //! ```
173 //!
174 //! So fine, we use a different block for E, and just use this block for allocations that fit:
175 //!
176 //! ```plain
//! +-----+---+-----+---------+-------+-----+-----+-- - - --+
//! |     |   |     |         |       |     |     |         |
//! |  A  | H |  I  |    J    |   C   |  F  |  G  |   •••   |
//! |     |   |     |         |       |     |     |         |
//! +-----+---+-----+---------+-------+-----+-----+-- - - --+
182 //! ```
183 //!
184 //! Sure, now let's free some shall we? And voilà, the problem just became much worse:
185 //!
186 //! ```plain
//! +-----+---+-----+---------+-------+-----+-----+-- - - --+
//! |     |   |     |         |       |     |     |         |
//! |  A  |   |  I  |    J    |       |  F  |     |   •••   |
//! |     |   |     |         |       |     |     |         |
//! +-----+---+-----+---------+-------+-----+-----+-- - - --+
192 //! ```
193 //!
194 //! # Leakage
195 //!
196 //! Memory leaks happen when allocations are kept alive past their shelf life. This most often
197 //! occurs because of [cyclic references]. If you have structures that have cycles, then make sure
198 //! you read the documentation for [`Arc`]/[`Rc`] carefully to avoid memory leaks. You can also
199 //! introduce memory leaks willingly by using [`mem::forget`] or [`Box::leak`] to name a few. In
200 //! all of these examples the memory can never be reclaimed, but that doesn't have to be the case
201 //! for something to be considered a leak. Say for example you have a [region] which you
202 //! suballocate, and at some point you drop all the suballocations. When that happens, the region
203 //! can be returned (freed) to the next level up the hierarchy, or it can be reused by another
204 //! suballocator. But if you happen to keep alive just one suballocation for the duration of the
205 //! program for instance, then the whole region is also kept as it is for that time (and keep in
206 //! mind this bubbles up the hierarchy). Therefore, for the program, that memory might be a leak
207 //! depending on the allocator, because some allocators wouldn't be able to reuse the entire rest
208 //! of the region. You must always consider the lifetime of your resources when choosing the
209 //! appropriate allocator.
210 //!
211 //! [suballocators]: Suballocator
212 //! [hierarchy]: Suballocator#memory-hierarchies
213 //! [buffer-image granularity]: crate::device::Properties::buffer_image_granularity
214 //! [cyclic references]: Arc#breaking-cycles-with-weak
215 //! [`Rc`]: std::rc::Rc
216 //! [`mem::forget`]: std::mem::forget
217 //! [region]: Suballocator#regions
218
219 mod layout;
220 pub mod suballocator;
221
222 use self::array_vec::ArrayVec;
223 pub use self::{
224 layout::DeviceLayout,
225 suballocator::{
226 AllocationType, BuddyAllocator, BumpAllocator, FreeListAllocator, MemoryAlloc,
227 PoolAllocator, SuballocationCreateInfo, SuballocationCreationError, Suballocator,
228 },
229 };
230 use super::{
231 DedicatedAllocation, DeviceAlignment, DeviceMemory, ExternalMemoryHandleTypes,
232 MemoryAllocateFlags, MemoryAllocateInfo, MemoryProperties, MemoryPropertyFlags,
233 MemoryRequirements, MemoryType,
234 };
235 use crate::{
236 device::{Device, DeviceOwned},
237 DeviceSize, RequirementNotMet, RequiresOneOf, Version, VulkanError,
238 };
239 use ash::vk::{MAX_MEMORY_HEAPS, MAX_MEMORY_TYPES};
240 use parking_lot::RwLock;
241 use std::{
242 error::Error,
243 fmt::{Display, Error as FmtError, Formatter},
244 sync::Arc,
245 };
246
247 const B: DeviceSize = 1;
248 const K: DeviceSize = 1024 * B;
249 const M: DeviceSize = 1024 * K;
250 const G: DeviceSize = 1024 * M;
251
252 /// General-purpose memory allocators which allocate from any memory type dynamically as needed.
253 pub unsafe trait MemoryAllocator: DeviceOwned {
254 /// Finds the most suitable memory type index in `memory_type_bits` using a filter. Returns
255 /// [`None`] if the requirements are too strict and no memory type is able to satisfy them.
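    ///
    /// A usage sketch (`allocator` is any type implementing this trait, and `memory_type_bits`
    /// is assumed to come from a resource's [`MemoryRequirements`]):
    ///
    /// ```ignore
    /// let filter = MemoryTypeFilter::from(MemoryUsage::Upload);
    /// let memory_type_index = allocator
    ///     .find_memory_type_index(memory_type_bits, filter)
    ///     .expect("no suitable memory type");
    /// ```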
    fn find_memory_type_index(
257 &self,
258 memory_type_bits: u32,
259 filter: MemoryTypeFilter,
260 ) -> Option<u32>;
261
262 /// Allocates memory from a specific memory type.
    fn allocate_from_type(
264 &self,
265 memory_type_index: u32,
266 create_info: SuballocationCreateInfo,
267 ) -> Result<MemoryAlloc, AllocationCreationError>;
268
269 /// Allocates memory from a specific memory type without checking the parameters.
270 ///
271 /// # Safety
272 ///
273 /// - If `memory_type_index` refers to a memory type with the [`protected`] flag set, then the
274 /// [`protected_memory`] feature must be enabled on the device.
275 /// - If `memory_type_index` refers to a memory type with the [`device_coherent`] flag set,
276 /// then the [`device_coherent_memory`] feature must be enabled on the device.
277 /// - `create_info.layout.size()` must not exceed the size of the heap that the memory type
278 /// corresponding to `memory_type_index` resides in.
279 ///
280 /// [`protected`]: MemoryPropertyFlags::protected
281 /// [`protected_memory`]: crate::device::Features::protected_memory
282 /// [`device_coherent`]: MemoryPropertyFlags::device_coherent
283 /// [`device_coherent_memory`]: crate::device::Features::device_coherent_memory
284 #[cfg_attr(not(feature = "document_unchecked"), doc(hidden))]
    unsafe fn allocate_from_type_unchecked(
286 &self,
287 memory_type_index: u32,
288 create_info: SuballocationCreateInfo,
289 never_allocate: bool,
290 ) -> Result<MemoryAlloc, AllocationCreationError>;
291
292 /// Allocates memory according to requirements.
293 ///
294 /// # Arguments
295 ///
296 /// - `requirements` - Requirements of the resource you want to allocate memory for.
297 ///
298 /// If you plan to bind this memory directly to a non-sparse resource, then this must
299 /// correspond to the value returned by either [`RawBuffer::memory_requirements`] or
300 /// [`RawImage::memory_requirements`] for the respective buffer or image.
301 ///
302 /// [`memory_type_bits`] must be below 2<sup>*n*</sup> where *n* is the number of available
303 /// memory types.
304 ///
305 /// The default is a layout with size [`DeviceLayout::MAX_SIZE`] and alignment
306 /// [`DeviceAlignment::MIN`] and the rest all zeroes, which must be overridden.
307 ///
308 /// - `allocation_type` - What type of resource this allocation will be used for.
309 ///
310 /// This should be [`Linear`] for buffers and linear images, and [`NonLinear`] for optimal
311 /// images. You can not bind memory allocated with the [`Linear`] type to optimal images or
312 /// bind memory allocated with the [`NonLinear`] type to buffers and linear images. You
313 /// should never use the [`Unknown`] type unless you have to, as that can be less memory
314 /// efficient.
315 ///
316 /// - `dedicated_allocation` - Allows a dedicated allocation to be created.
317 ///
318 /// You should always fill this field in if you are allocating memory for a non-sparse
319 /// resource, otherwise the allocator won't be able to create a dedicated allocation if one
320 /// is recommended.
321 ///
322 /// This option is silently ignored (treated as `None`) if the device API version is below
323 /// 1.1 and the [`khr_dedicated_allocation`] extension is not enabled on the device.
324 ///
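    /// A sketch of calling this directly (most code goes through the buffer and image
    /// constructors instead; `allocator`, `buffer` and its `requirements` are assumed to already
    /// exist):
    ///
    /// ```ignore
    /// let allocation = allocator
    ///     .allocate(
    ///         requirements,
    ///         AllocationType::Linear,
    ///         AllocationCreateInfo::default(),
    ///         Some(DedicatedAllocation::Buffer(&buffer)),
    ///     )
    ///     .expect("failed to allocate memory");
    /// ```
    ///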
325 /// [`alignment`]: MemoryRequirements::alignment
326 /// [`memory_type_bits`]: MemoryRequirements::memory_type_bits
327 /// [`RawBuffer::memory_requirements`]: crate::buffer::sys::RawBuffer::memory_requirements
328 /// [`RawImage::memory_requirements`]: crate::image::sys::RawImage::memory_requirements
329 /// [`Linear`]: AllocationType::Linear
330 /// [`NonLinear`]: AllocationType::NonLinear
331 /// [`Unknown`]: AllocationType::Unknown
332 /// [`khr_dedicated_allocation`]: crate::device::DeviceExtensions::khr_dedicated_allocation
    fn allocate(
334 &self,
335 requirements: MemoryRequirements,
336 allocation_type: AllocationType,
337 create_info: AllocationCreateInfo,
338 dedicated_allocation: Option<DedicatedAllocation<'_>>,
339 ) -> Result<MemoryAlloc, AllocationCreationError>;
340
341 /// Allocates memory according to requirements without checking the parameters.
342 ///
343 /// # Safety
344 ///
345 /// - If `create_info.dedicated_allocation` is `Some` then `create_info.requirements.size` must
346 /// match the memory requirements of the resource.
347 /// - If `create_info.dedicated_allocation` is `Some` then the device the resource was created
348 /// with must match the device the allocator was created with.
349 #[cfg_attr(not(feature = "document_unchecked"), doc(hidden))]
    unsafe fn allocate_unchecked(
351 &self,
352 requirements: MemoryRequirements,
353 allocation_type: AllocationType,
354 create_info: AllocationCreateInfo,
355 dedicated_allocation: Option<DedicatedAllocation<'_>>,
356 ) -> Result<MemoryAlloc, AllocationCreationError>;
357
358 /// Creates a root allocation/dedicated allocation without checking the parameters.
359 ///
360 /// # Safety
361 ///
362 /// - `allocation_size` must not exceed the size of the heap that the memory type corresponding
363 /// to `memory_type_index` resides in.
364 /// - The handle types in `export_handle_types` must be supported and compatible, as reported by
365 /// [`ExternalBufferProperties`] or [`ImageFormatProperties`].
366 /// - If any of the handle types in `export_handle_types` require a dedicated allocation, as
367 /// reported by [`ExternalBufferProperties::external_memory_properties`] or
368 /// [`ImageFormatProperties::external_memory_properties`], then `dedicated_allocation` must
369 /// not be `None`.
370 ///
371 /// [`ExternalBufferProperties`]: crate::buffer::ExternalBufferProperties
372 /// [`ImageFormatProperties`]: crate::image::ImageFormatProperties
373 /// [`ExternalBufferProperties::external_memory_properties`]: crate::buffer::ExternalBufferProperties
374 /// [`ImageFormatProperties::external_memory_properties`]: crate::image::ImageFormatProperties::external_memory_properties
375 #[cfg_attr(not(feature = "document_unchecked"), doc(hidden))]
    unsafe fn allocate_dedicated_unchecked(
377 &self,
378 memory_type_index: u32,
379 allocation_size: DeviceSize,
380 dedicated_allocation: Option<DedicatedAllocation<'_>>,
381 export_handle_types: ExternalMemoryHandleTypes,
382 ) -> Result<MemoryAlloc, AllocationCreationError>;
383 }
384
385 /// Describes what memory property flags are required, preferred and not preferred when picking a
386 /// memory type index.
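///
/// For instance, a filter equivalent to what [`MemoryUsage::Upload`] converts into (see the
/// `From<MemoryUsage>` impl below) might be spelled out by hand like this:
///
/// ```
/// use vulkano::memory::allocator::MemoryTypeFilter;
/// use vulkano::memory::MemoryPropertyFlags;
///
/// let filter = MemoryTypeFilter {
///     required_flags: MemoryPropertyFlags::HOST_VISIBLE,
///     preferred_flags: MemoryPropertyFlags::DEVICE_LOCAL,
///     not_preferred_flags: MemoryPropertyFlags::HOST_CACHED,
/// };
/// ```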
387 #[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]
388 pub struct MemoryTypeFilter {
389 pub required_flags: MemoryPropertyFlags,
390 pub preferred_flags: MemoryPropertyFlags,
391 pub not_preferred_flags: MemoryPropertyFlags,
392 }
393
394 impl From<MemoryUsage> for MemoryTypeFilter {
395 #[inline]
    fn from(usage: MemoryUsage) -> Self {
397 let mut filter = Self::default();
398
399 match usage {
400 MemoryUsage::DeviceOnly => {
401 filter.preferred_flags |= MemoryPropertyFlags::DEVICE_LOCAL;
402 filter.not_preferred_flags |= MemoryPropertyFlags::HOST_VISIBLE;
403 }
404 MemoryUsage::Upload => {
405 filter.required_flags |= MemoryPropertyFlags::HOST_VISIBLE;
406 filter.preferred_flags |= MemoryPropertyFlags::DEVICE_LOCAL;
407 filter.not_preferred_flags |= MemoryPropertyFlags::HOST_CACHED;
408 }
409 MemoryUsage::Download => {
410 filter.required_flags |= MemoryPropertyFlags::HOST_VISIBLE;
411 filter.preferred_flags |= MemoryPropertyFlags::HOST_CACHED;
412 }
413 }
414
415 filter
416 }
417 }
418
419 /// Parameters to create a new [allocation] using a [memory allocator].
420 ///
421 /// [allocation]: MemoryAlloc
422 /// [memory allocator]: MemoryAllocator
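///
/// # Examples
///
/// A small sketch of requesting a dedicated, device-only allocation for a long-lived resource:
///
/// ```
/// use vulkano::memory::allocator::{AllocationCreateInfo, MemoryAllocatePreference, MemoryUsage};
///
/// let create_info = AllocationCreateInfo {
///     usage: MemoryUsage::DeviceOnly,
///     allocate_preference: MemoryAllocatePreference::AlwaysAllocate,
///     ..Default::default()
/// };
/// ```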
423 #[derive(Clone, Debug)]
424 pub struct AllocationCreateInfo {
425 /// The intended usage for the allocation.
426 ///
427 /// The default value is [`MemoryUsage::DeviceOnly`].
428 pub usage: MemoryUsage,
429
430 /// How eager the allocator should be to allocate [`DeviceMemory`].
431 ///
432 /// The default value is [`MemoryAllocatePreference::Unknown`].
433 pub allocate_preference: MemoryAllocatePreference,
434
435 pub _ne: crate::NonExhaustive,
436 }
437
438 impl Default for AllocationCreateInfo {
439 #[inline]
    fn default() -> Self {
441 AllocationCreateInfo {
442 usage: MemoryUsage::DeviceOnly,
443 allocate_preference: MemoryAllocatePreference::Unknown,
444 _ne: crate::NonExhaustive(()),
445 }
446 }
447 }
448
449 /// Describes how a memory allocation is going to be used.
450 ///
451 /// This is mostly an optimization, except for `MemoryUsage::DeviceOnly` which will pick a memory
452 /// type that is not host-accessible if such a type exists.
453 #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
454 #[non_exhaustive]
455 pub enum MemoryUsage {
456 /// The memory is intended to only be used by the device.
457 ///
458 /// Prefers picking a memory type with the [`DEVICE_LOCAL`] flag and without the
459 /// [`HOST_VISIBLE`] flag.
460 ///
461 /// This option is what you will always want to use unless the memory needs to be accessed by
462 /// the CPU, because a memory type that can only be accessed by the GPU is going to give the
463 /// best performance. Example use cases would be textures and other maps which are written to
464 /// once and then never again, or resources that are only written and read by the GPU, like
465 /// render targets and intermediary buffers.
466 ///
467 /// [`DEVICE_LOCAL`]: MemoryPropertyFlags::DEVICE_LOCAL
468 /// [`HOST_VISIBLE`]: MemoryPropertyFlags::HOST_VISIBLE
469 DeviceOnly,
470
471 /// The memory is intended for upload to the device.
472 ///
473 /// Guarantees picking a memory type with the [`HOST_VISIBLE`] flag. Prefers picking one
474 /// without the [`HOST_CACHED`] flag and with the [`DEVICE_LOCAL`] flag.
475 ///
476 /// This option is best suited for resources that need to be constantly updated by the CPU,
    /// like vertex and index buffers for example. It is also necessary for *staging buffers*,
    /// whose only purpose in life is to get data into device-local memory or texels into an
479 /// optimal image.
480 ///
481 /// [`HOST_VISIBLE`]: MemoryPropertyFlags::HOST_VISIBLE
482 /// [`HOST_CACHED`]: MemoryPropertyFlags::HOST_CACHED
483 /// [`DEVICE_LOCAL`]: MemoryPropertyFlags::DEVICE_LOCAL
484 Upload,
485
486 /// The memory is intended for download from the device.
487 ///
488 /// Guarantees picking a memory type with the [`HOST_VISIBLE`] flag. Prefers picking one with
489 /// the [`HOST_CACHED`] flag and without the [`DEVICE_LOCAL`] flag.
490 ///
491 /// This option is best suited if you're using the device for things other than rendering and
492 /// you need to get the results back to the host. That might be compute shading, or image or
493 /// video manipulation, or screenshotting for example.
494 ///
495 /// [`HOST_VISIBLE`]: MemoryPropertyFlags::HOST_VISIBLE
496 /// [`HOST_CACHED`]: MemoryPropertyFlags::HOST_CACHED
497 /// [`DEVICE_LOCAL`]: MemoryPropertyFlags::DEVICE_LOCAL
498 Download,
499 }
500
501 /// Describes whether allocating [`DeviceMemory`] is desired.
502 #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
503 #[non_exhaustive]
504 pub enum MemoryAllocatePreference {
505 /// There is no known preference, let the allocator decide.
506 Unknown,
507
508 /// The allocator should never allocate `DeviceMemory` and should instead only suballocate from
509 /// existing blocks.
510 ///
511 /// This option is best suited if you can not afford the overhead of allocating `DeviceMemory`.
512 NeverAllocate,
513
514 /// The allocator should always allocate `DeviceMemory`.
515 ///
516 /// This option is best suited if you are allocating a long-lived resource that you know could
517 /// benefit from having a dedicated allocation.
518 AlwaysAllocate,
519 }
520
521 /// Error that can be returned when creating an [allocation] using a [memory allocator].
522 ///
523 /// [allocation]: MemoryAlloc
524 /// [memory allocator]: MemoryAllocator
525 #[derive(Clone, Debug, PartialEq, Eq)]
526 pub enum AllocationCreationError {
527 VulkanError(VulkanError),
528
529 /// There is not enough memory in the pool.
530 ///
531 /// This is returned when using [`MemoryAllocatePreference::NeverAllocate`] and there is not
532 /// enough memory in the pool.
533 OutOfPoolMemory,
534
535 /// A dedicated allocation is required but was explicitly forbidden.
536 ///
537 /// This is returned when using [`MemoryAllocatePreference::NeverAllocate`] and the
538 /// implementation requires a dedicated allocation.
539 DedicatedAllocationRequired,
540
541 /// The block size for the allocator was exceeded.
542 ///
543 /// This is returned when using [`MemoryAllocatePreference::NeverAllocate`] and the allocation
544 /// size exceeded the block size for all heaps of suitable memory types.
545 BlockSizeExceeded,
546
547 /// The block size for the suballocator was exceeded.
548 ///
549 /// This is returned when using [`GenericMemoryAllocator<Arc<PoolAllocator<BLOCK_SIZE>>>`] if
550 /// the allocation size exceeded `BLOCK_SIZE`.
551 SuballocatorBlockSizeExceeded,
552 }
553
554 impl Error for AllocationCreationError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
556 match self {
557 Self::VulkanError(err) => Some(err),
558 _ => None,
559 }
560 }
561 }
562
563 impl Display for AllocationCreationError {
    fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), FmtError> {
565 match self {
566 Self::VulkanError(_) => write!(f, "a runtime error occurred"),
567 Self::OutOfPoolMemory => write!(f, "the pool doesn't have enough free space"),
568 Self::DedicatedAllocationRequired => write!(
569 f,
570 "a dedicated allocation is required but was explicitly forbidden",
571 ),
572 Self::BlockSizeExceeded => write!(
573 f,
574 "the allocation size was greater than the block size for all heaps of suitable \
575 memory types and dedicated allocations were explicitly forbidden",
576 ),
577 Self::SuballocatorBlockSizeExceeded => write!(
578 f,
579 "the allocation size was greater than the suballocator's block size",
580 ),
581 }
582 }
583 }
584
585 impl From<VulkanError> for AllocationCreationError {
    fn from(err: VulkanError) -> Self {
587 AllocationCreationError::VulkanError(err)
588 }
589 }
590
591 /// Standard memory allocator intended as a global and general-purpose allocator.
592 ///
593 /// This type of allocator is what you should always use, unless you know, for a fact, that it is
594 /// not suited to the task.
595 ///
596 /// See also [`GenericMemoryAllocator`] for details about the allocation algorithm, and
597 /// [`FreeListAllocator`] for details about the suballocation algorithm and example usage.
598 pub type StandardMemoryAllocator = GenericMemoryAllocator<Arc<FreeListAllocator>>;
599
600 impl StandardMemoryAllocator {
601 /// Creates a new `StandardMemoryAllocator` with default configuration.
    pub fn new_default(device: Arc<Device>) -> Self {
603 #[allow(clippy::erasing_op, clippy::identity_op)]
604 let create_info = GenericMemoryAllocatorCreateInfo {
605 #[rustfmt::skip]
606 block_sizes: &[
607 (0 * B, 64 * M),
608 (1 * G, 256 * M),
609 ],
610 ..Default::default()
611 };
612
613 unsafe { Self::new_unchecked(device, create_info) }
614 }
615 }
616
617 /// A generic implementation of a [memory allocator].
618 ///
619 /// The allocator keeps a pool of [`DeviceMemory`] blocks for each memory type and uses the type
620 /// parameter `S` to [suballocate] these blocks. You can also configure the sizes of these blocks.
/// This means that you can have as many `GenericMemoryAllocator`s as you want for different
622 /// needs, or for performance reasons, as long as the block sizes are configured properly so that
623 /// too much memory isn't wasted.
624 ///
625 /// See also [the `MemoryAllocator` implementation].
626 ///
627 /// # `DeviceMemory` allocation
628 ///
629 /// If an allocation is created with the [`MemoryAllocatePreference::Unknown`] option, and the
630 /// allocator deems the allocation too big for suballocation (larger than half the block size), or
631 /// the implementation prefers or requires a dedicated allocation, then that allocation is made a
632 /// dedicated allocation. Using [`MemoryAllocatePreference::NeverAllocate`], a dedicated allocation
633 /// is never created, even if the allocation is larger than the block size or a dedicated
634 /// allocation is required. In such a case an error is returned instead. Using
635 /// [`MemoryAllocatePreference::AlwaysAllocate`], a dedicated allocation is always created.
636 ///
637 /// In all other cases, `DeviceMemory` is only allocated if a pool runs out of memory and needs
638 /// another block. No `DeviceMemory` is allocated when the allocator is created, the blocks are
639 /// only allocated once they are needed.
640 ///
641 /// # Locking behavior
642 ///
643 /// The allocator never needs to lock while suballocating unless `S` needs to lock. The only time
644 /// when a pool must be locked is when a new `DeviceMemory` block is allocated for the pool. This
645 /// means that the allocator is suited to both locking and lock-free (sub)allocation algorithms.
646 ///
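/// # Examples
///
/// A configuration sketch (assuming `device: Arc<Device>` already exists; the thresholds and
/// block sizes below are made up for illustration):
///
/// ```ignore
/// use std::sync::Arc;
/// use vulkano::memory::allocator::{
///     FreeListAllocator, GenericMemoryAllocator, GenericMemoryAllocatorCreateInfo,
/// };
///
/// let allocator = GenericMemoryAllocator::<Arc<FreeListAllocator>>::new(
///     device.clone(),
///     GenericMemoryAllocatorCreateInfo {
///         // Heaps smaller than 1 GiB get 16 MiB blocks, larger heaps get 128 MiB blocks.
///         block_sizes: &[(0, 16 * 1024 * 1024), (1024 * 1024 * 1024, 128 * 1024 * 1024)],
///         ..Default::default()
///     },
/// )
/// .unwrap();
/// ```
///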
647 /// [memory allocator]: MemoryAllocator
648 /// [suballocate]: Suballocator
649 /// [the `MemoryAllocator` implementation]: Self#impl-MemoryAllocator-for-GenericMemoryAllocator<S>
650 #[derive(Debug)]
651 pub struct GenericMemoryAllocator<S: Suballocator> {
652 device: Arc<Device>,
653 // Each memory type has a pool of `DeviceMemory` blocks.
654 pools: ArrayVec<Pool<S>, MAX_MEMORY_TYPES>,
655 // Each memory heap has its own block size.
656 block_sizes: ArrayVec<DeviceSize, MAX_MEMORY_HEAPS>,
657 allocation_type: AllocationType,
658 dedicated_allocation: bool,
659 export_handle_types: ArrayVec<ExternalMemoryHandleTypes, MAX_MEMORY_TYPES>,
660 flags: MemoryAllocateFlags,
661 // Global mask of memory types.
662 memory_type_bits: u32,
663 // How many `DeviceMemory` allocations should be allowed before restricting them.
664 max_allocations: u32,
665 }
666
667 #[derive(Debug)]
668 struct Pool<S> {
669 blocks: RwLock<Vec<S>>,
670 // This is cached here for faster access, so we don't need to hop through 3 pointers.
671 memory_type: ash::vk::MemoryType,
672 }
673
674 impl<S: Suballocator> GenericMemoryAllocator<S> {
    // This is a false positive; we only use this const for static initialization.
676 #[allow(clippy::declare_interior_mutable_const)]
677 const EMPTY_POOL: Pool<S> = Pool {
678 blocks: RwLock::new(Vec::new()),
679 memory_type: ash::vk::MemoryType {
680 property_flags: ash::vk::MemoryPropertyFlags::empty(),
681 heap_index: 0,
682 },
683 };
684
685 /// Creates a new `GenericMemoryAllocator<S>` using the provided suballocator `S` for
686 /// suballocation of [`DeviceMemory`] blocks.
687 ///
688 /// # Panics
689 ///
690 /// - Panics if `create_info.block_sizes` is not sorted by threshold.
691 /// - Panics if `create_info.block_sizes` contains duplicate thresholds.
692 /// - Panics if `create_info.block_sizes` does not contain a baseline threshold of `0`.
693 /// - Panics if the block size for a heap exceeds the size of the heap.
    pub fn new(
695 device: Arc<Device>,
696 create_info: GenericMemoryAllocatorCreateInfo<'_, '_>,
697 ) -> Result<Self, GenericMemoryAllocatorCreationError> {
698 Self::validate_new(&device, &create_info)?;
699
700 Ok(unsafe { Self::new_unchecked(device, create_info) })
701 }
702
    fn validate_new(
704 device: &Device,
705 create_info: &GenericMemoryAllocatorCreateInfo<'_, '_>,
706 ) -> Result<(), GenericMemoryAllocatorCreationError> {
707 let &GenericMemoryAllocatorCreateInfo {
708 block_sizes,
709 allocation_type: _,
710 dedicated_allocation: _,
711 export_handle_types,
712 device_address: _,
713 _ne: _,
714 } = create_info;
715
716 assert!(
717 block_sizes.windows(2).all(|win| win[0].0 < win[1].0),
718 "`create_info.block_sizes` must be sorted by threshold without duplicates",
719 );
720 assert!(
721 matches!(block_sizes.first(), Some((0, _))),
722 "`create_info.block_sizes` must contain a baseline threshold `0`",
723 );
724
725 if !export_handle_types.is_empty() {
726 if !(device.api_version() >= Version::V1_1
727 && device.enabled_extensions().khr_external_memory)
728 {
729 return Err(GenericMemoryAllocatorCreationError::RequirementNotMet {
730 required_for: "`create_info.export_handle_types` is not empty",
731 requires_one_of: RequiresOneOf {
732 api_version: Some(Version::V1_1),
733 device_extensions: &["khr_external_memory"],
734 ..Default::default()
735 },
736 });
737 }
738
739 assert!(
740 export_handle_types.len()
741 == device
742 .physical_device()
743 .memory_properties()
744 .memory_types
745 .len(),
746 "`create_info.export_handle_types` must contain as many elements as the number of \
747 memory types if not empty",
748 );
749
750 for export_handle_types in export_handle_types {
751 // VUID-VkExportMemoryAllocateInfo-handleTypes-parameter
752 export_handle_types.validate_device(device)?;
753 }
754 }
755
756 Ok(())
757 }
758
759 #[cfg_attr(not(feature = "document_unchecked"), doc(hidden))]
    pub unsafe fn new_unchecked(
761 device: Arc<Device>,
762 create_info: GenericMemoryAllocatorCreateInfo<'_, '_>,
763 ) -> Self {
764 let GenericMemoryAllocatorCreateInfo {
765 block_sizes,
766 allocation_type,
767 dedicated_allocation,
768 export_handle_types,
769 mut device_address,
770 _ne: _,
771 } = create_info;
772
773 let MemoryProperties {
774 memory_types,
775 memory_heaps,
776 } = device.physical_device().memory_properties();
777
778 let mut pools = ArrayVec::new(memory_types.len(), [Self::EMPTY_POOL; MAX_MEMORY_TYPES]);
779 for (i, memory_type) in memory_types.iter().enumerate() {
780 pools[i].memory_type = ash::vk::MemoryType {
781 property_flags: memory_type.property_flags.into(),
782 heap_index: memory_type.heap_index,
783 };
784 }
785
786 let block_sizes = {
787 let mut sizes = ArrayVec::new(memory_heaps.len(), [0; MAX_MEMORY_HEAPS]);
788
789 for (i, memory_heap) in memory_heaps.iter().enumerate() {
790 let idx = match block_sizes.binary_search_by_key(&memory_heap.size, |&(t, _)| t) {
791 Ok(idx) => idx,
792 Err(idx) => idx.saturating_sub(1),
793 };
794 sizes[i] = block_sizes[idx].1;
795
796 // VUID-vkAllocateMemory-pAllocateInfo-01713
797 assert!(sizes[i] <= memory_heap.size);
798 }
799
800 sizes
801 };
802
803 let export_handle_types = {
804 let mut types = ArrayVec::new(
805 export_handle_types.len(),
806 [ExternalMemoryHandleTypes::empty(); MAX_MEMORY_TYPES],
807 );
808 types.copy_from_slice(export_handle_types);
809
810 types
811 };
812
813 // VUID-VkMemoryAllocateInfo-flags-03331
814 device_address &= device.enabled_features().buffer_device_address
815 && !device.enabled_extensions().ext_buffer_device_address;
816 // Providers of `VkMemoryAllocateFlags`
817 device_address &=
818 device.api_version() >= Version::V1_1 || device.enabled_extensions().khr_device_group;
819
820 let mut memory_type_bits = u32::MAX;
821 for (index, MemoryType { property_flags, .. }) in memory_types.iter().enumerate() {
822 if property_flags.intersects(
823 MemoryPropertyFlags::LAZILY_ALLOCATED
824 | MemoryPropertyFlags::PROTECTED
825 | MemoryPropertyFlags::DEVICE_COHERENT
826 | MemoryPropertyFlags::DEVICE_UNCACHED
827 | MemoryPropertyFlags::RDMA_CAPABLE,
828 ) {
829 // VUID-VkMemoryAllocateInfo-memoryTypeIndex-01872
830 // VUID-vkAllocateMemory-deviceCoherentMemory-02790
831 // Lazily allocated memory would just cause problems for suballocation in general.
832 memory_type_bits &= !(1 << index);
833 }
834 }
835
836 let flags = if device_address {
837 MemoryAllocateFlags::DEVICE_ADDRESS
838 } else {
839 MemoryAllocateFlags::empty()
840 };
841
842 let max_memory_allocation_count = device
843 .physical_device()
844 .properties()
845 .max_memory_allocation_count;
846 let max_allocations = max_memory_allocation_count / 4 * 3;
847
848 GenericMemoryAllocator {
849 device,
850 pools,
851 block_sizes,
852 allocation_type,
853 dedicated_allocation,
854 export_handle_types,
855 flags,
856 memory_type_bits,
857 max_allocations,
858 }
859 }
860
    fn validate_allocate_from_type(&self, memory_type_index: u32) {
862 let memory_type = &self.pools[usize::try_from(memory_type_index).unwrap()].memory_type;
863
864 // VUID-VkMemoryAllocateInfo-memoryTypeIndex-01872
865 assert!(
            !(memory_type
                .property_flags
                .contains(ash::vk::MemoryPropertyFlags::PROTECTED)
                && !self.device.enabled_features().protected_memory),
870 "attempted to allocate from a protected memory type without the `protected_memory` \
871 feature being enabled on the device",
872 );
873
874 // VUID-vkAllocateMemory-deviceCoherentMemory-02790
875 assert!(
            !(memory_type
                .property_flags
                .contains(ash::vk::MemoryPropertyFlags::DEVICE_COHERENT_AMD)
                && !self.device.enabled_features().device_coherent_memory),
880 "attempted to allocate memory from a device-coherent memory type without the \
881 `device_coherent_memory` feature being enabled on the device",
882 );
883 }
884
    fn validate_allocate(
886 &self,
887 requirements: MemoryRequirements,
888 dedicated_allocation: Option<DedicatedAllocation<'_>>,
889 ) {
890 assert!(requirements.memory_type_bits != 0);
891 assert!(requirements.memory_type_bits < 1 << self.pools.len());
892
893 if let Some(dedicated_allocation) = dedicated_allocation {
894 match dedicated_allocation {
895 DedicatedAllocation::Buffer(buffer) => {
896 // VUID-VkMemoryDedicatedAllocateInfo-commonparent
897 assert_eq!(&self.device, buffer.device());
898
899 let required_size = buffer.memory_requirements().layout.size();
900
901 // VUID-VkMemoryDedicatedAllocateInfo-buffer-02965
                    assert!(requirements.layout.size() == required_size);
903 }
904 DedicatedAllocation::Image(image) => {
905 // VUID-VkMemoryDedicatedAllocateInfo-commonparent
906 assert_eq!(&self.device, image.device());
907
908 let required_size = image.memory_requirements()[0].layout.size();
909
910 // VUID-VkMemoryDedicatedAllocateInfo-image-02964
                    assert!(requirements.layout.size() == required_size);
912 }
913 }
914 }
915
916 // VUID-VkMemoryAllocateInfo-pNext-00639
917 // VUID-VkExportMemoryAllocateInfo-handleTypes-00656
918 // Can't validate, must be ensured by user
919 }
920 }
921
922 unsafe impl<S: Suballocator> MemoryAllocator for GenericMemoryAllocator<S> {
    fn find_memory_type_index(
924 &self,
925 memory_type_bits: u32,
926 filter: MemoryTypeFilter,
927 ) -> Option<u32> {
928 let required_flags = filter.required_flags.into();
929 let preferred_flags = filter.preferred_flags.into();
930 let not_preferred_flags = filter.not_preferred_flags.into();
931
932 self.pools
933 .iter()
934 .map(|pool| pool.memory_type.property_flags)
935 .enumerate()
936 // Filter out memory types which are supported by the memory type bits and have the
937 // required flags set.
938 .filter(|&(index, flags)| {
939 memory_type_bits & (1 << index) != 0 && flags & required_flags == required_flags
940 })
941 // Rank memory types with more of the preferred flags higher, and ones with more of the
942 // not preferred flags lower.
943 .min_by_key(|&(_, flags)| {
944 (!flags & preferred_flags).as_raw().count_ones()
945 + (flags & not_preferred_flags).as_raw().count_ones()
946 })
947 .map(|(index, _)| index as u32)
948 }
949
950 /// Allocates memory from a specific memory type.
951 ///
952 /// # Panics
953 ///
954 /// - Panics if `memory_type_index` is not less than the number of available memory types.
955 /// - Panics if `memory_type_index` refers to a memory type which has the [`PROTECTED`] flag
956 /// set and the [`protected_memory`] feature is not enabled on the device.
957 /// - Panics if `memory_type_index` refers to a memory type which has the [`DEVICE_COHERENT`]
958 /// flag set and the [`device_coherent_memory`] feature is not enabled on the device.
959 ///
960 /// # Errors
961 ///
962 /// - Returns an error if allocating a new block is required and failed. This can be one of the
963 /// OOM errors or [`TooManyObjects`].
964 /// - Returns [`BlockSizeExceeded`] if `create_info.layout.size()` is greater than the block
965 /// size corresponding to the heap that the memory type corresponding to `memory_type_index`
966 /// resides in.
967 /// - Returns [`SuballocatorBlockSizeExceeded`] if `S` is `PoolAllocator<BLOCK_SIZE>` and
968 /// `create_info.layout.size()` is greater than `BLOCK_SIZE`.
969 ///
970 /// [`PROTECTED`]: MemoryPropertyFlags::PROTECTED
971 /// [`protected_memory`]: crate::device::Features::protected_memory
972 /// [`DEVICE_COHERENT`]: MemoryPropertyFlags::DEVICE_COHERENT
973 /// [`device_coherent_memory`]: crate::device::Features::device_coherent_memory
974 /// [`TooManyObjects`]: VulkanError::TooManyObjects
975 /// [`BlockSizeExceeded`]: AllocationCreationError::BlockSizeExceeded
976 /// [`SuballocatorBlockSizeExceeded`]: AllocationCreationError::SuballocatorBlockSizeExceeded
    fn allocate_from_type(
978 &self,
979 memory_type_index: u32,
980 create_info: SuballocationCreateInfo,
981 ) -> Result<MemoryAlloc, AllocationCreationError> {
982 self.validate_allocate_from_type(memory_type_index);
983
984 if self.pools[memory_type_index as usize]
985 .memory_type
986 .property_flags
987 .contains(ash::vk::MemoryPropertyFlags::LAZILY_ALLOCATED)
988 {
989 return unsafe {
990 self.allocate_dedicated_unchecked(
991 memory_type_index,
992 create_info.layout.size(),
993 None,
994 if !self.export_handle_types.is_empty() {
995 self.export_handle_types[memory_type_index as usize]
996 } else {
997 ExternalMemoryHandleTypes::empty()
998 },
999 )
1000 };
1001 }
1002
1003 unsafe { self.allocate_from_type_unchecked(memory_type_index, create_info, false) }
1004 }
1005
    unsafe fn allocate_from_type_unchecked(
1007 &self,
1008 memory_type_index: u32,
1009 create_info: SuballocationCreateInfo,
1010 never_allocate: bool,
1011 ) -> Result<MemoryAlloc, AllocationCreationError> {
1012 let SuballocationCreateInfo {
1013 layout,
1014 allocation_type: _,
1015 _ne: _,
1016 } = create_info;
1017
1018 let size = layout.size();
1019 let pool = &self.pools[memory_type_index as usize];
1020 let block_size = self.block_sizes[pool.memory_type.heap_index as usize];
1021
1022 if size > block_size {
1023 return Err(AllocationCreationError::BlockSizeExceeded);
1024 }
1025
1026 let mut blocks = if S::IS_BLOCKING {
1027 // If the allocation algorithm needs to block, then there's no point in trying to avoid
1028 // locks here either. In that case the best strategy is to take full advantage of it by
1029 // always taking an exclusive lock, which lets us sort the blocks by free size. If you
1030 // as a user want to avoid locks, simply don't share the allocator between threads. You
1031 // can create as many allocators as you wish, but keep in mind that that will waste a
1032 // huge amount of memory unless you configure your block sizes properly!
1033
1034 let mut blocks = pool.blocks.write();
1035 blocks.sort_by_key(Suballocator::free_size);
1036 let (Ok(idx) | Err(idx)) = blocks.binary_search_by_key(&size, Suballocator::free_size);
1037 for block in &blocks[idx..] {
1038 match block.allocate(create_info.clone()) {
1039 Ok(allocation) => return Ok(allocation),
1040 Err(SuballocationCreationError::BlockSizeExceeded) => {
1041 return Err(AllocationCreationError::SuballocatorBlockSizeExceeded);
1042 }
1043 Err(_) => {}
1044 }
1045 }
1046
1047 blocks
1048 } else {
1049 // If the allocation algorithm is lock-free, then we should avoid taking an exclusive
            // lock unless it is absolutely necessary (meaning, only when allocating a new
1051 // `DeviceMemory` block and inserting it into a pool). This has the disadvantage that
1052 // traversing the pool is O(n), which is not a problem since the number of blocks is
1053 // expected to be small. If there are more than 10 blocks in a pool then that's a
1054 // configuration error. Also, sorting the blocks before each allocation would be less
            // efficient, because getting the free size of the `PoolAllocator` and `BumpAllocator`
            // costs the same as trying to allocate.
1057
1058 let blocks = pool.blocks.read();
1059 // Search in reverse order because we always append new blocks at the end.
1060 for block in blocks.iter().rev() {
1061 match block.allocate(create_info.clone()) {
1062 Ok(allocation) => return Ok(allocation),
1063 // This can happen when using the `PoolAllocator<BLOCK_SIZE>` if the allocation
1064 // size is greater than `BLOCK_SIZE`.
1065 Err(SuballocationCreationError::BlockSizeExceeded) => {
1066 return Err(AllocationCreationError::SuballocatorBlockSizeExceeded);
1067 }
1068 Err(_) => {}
1069 }
1070 }
1071
1072 let len = blocks.len();
1073 drop(blocks);
1074 let blocks = pool.blocks.write();
1075 if blocks.len() > len {
1076 // Another thread beat us to it and inserted a fresh block, try to allocate from it.
1077 match blocks[len].allocate(create_info.clone()) {
1078 Ok(allocation) => return Ok(allocation),
1079 // This can happen if this is the first block that was inserted and when using
1080 // the `PoolAllocator<BLOCK_SIZE>` if the allocation size is greater than
1081 // `BLOCK_SIZE`.
1082 Err(SuballocationCreationError::BlockSizeExceeded) => {
1083 return Err(AllocationCreationError::SuballocatorBlockSizeExceeded);
1084 }
1085 Err(_) => {}
1086 }
1087 }
1088
1089 blocks
1090 };
1091
1092 // For bump allocators, first do a garbage sweep and try to allocate again.
1093 if S::NEEDS_CLEANUP {
1094 blocks.iter_mut().for_each(Suballocator::cleanup);
1095 blocks.sort_unstable_by_key(Suballocator::free_size);
1096
1097 if let Some(block) = blocks.last() {
1098 if let Ok(allocation) = block.allocate(create_info.clone()) {
1099 return Ok(allocation);
1100 }
1101 }
1102 }
1103
1104 if never_allocate {
1105 return Err(AllocationCreationError::OutOfPoolMemory);
1106 }
1107
1108 // The pool doesn't have enough real estate, so we need a new block.
1109 let block = {
1110 let export_handle_types = if !self.export_handle_types.is_empty() {
1111 self.export_handle_types[memory_type_index as usize]
1112 } else {
1113 ExternalMemoryHandleTypes::empty()
1114 };
1115 let mut i = 0;
1116
1117 loop {
1118 let allocate_info = MemoryAllocateInfo {
1119 allocation_size: block_size >> i,
1120 memory_type_index,
1121 export_handle_types,
1122 dedicated_allocation: None,
1123 flags: self.flags,
1124 ..Default::default()
1125 };
1126 match DeviceMemory::allocate_unchecked(self.device.clone(), allocate_info, None) {
1127 Ok(device_memory) => {
1128 break S::new(MemoryAlloc::new(device_memory)?);
1129 }
1130 // Retry up to 3 times, halving the allocation size each time.
1131 Err(VulkanError::OutOfHostMemory | VulkanError::OutOfDeviceMemory) if i < 3 => {
1132 i += 1;
1133 }
1134 Err(err) => return Err(err.into()),
1135 }
1136 }
1137 };
1138
1139 blocks.push(block);
1140 let block = blocks.last().unwrap();
1141
1142 match block.allocate(create_info) {
1143 Ok(allocation) => Ok(allocation),
1144 // This can happen if the block ended up smaller than advertised because there wasn't
1145 // enough memory.
1146 Err(SuballocationCreationError::OutOfRegionMemory) => Err(
1147 AllocationCreationError::VulkanError(VulkanError::OutOfDeviceMemory),
1148 ),
1149 // This can not happen as the block is fresher than Febreze and we're still holding an
1150 // exclusive lock.
1151 Err(SuballocationCreationError::FragmentedRegion) => unreachable!(),
1152 // This can happen if this is the first block that was inserted and when using the
1153 // `PoolAllocator<BLOCK_SIZE>` if the allocation size is greater than `BLOCK_SIZE`.
1154 Err(SuballocationCreationError::BlockSizeExceeded) => {
1155 Err(AllocationCreationError::SuballocatorBlockSizeExceeded)
1156 }
1157 }
1158 }
1159
1160 /// Allocates memory according to requirements.
1161 ///
1162 /// # Panics
1163 ///
1164 /// - Panics if `create_info.requirements.memory_type_bits` is zero.
1165 /// - Panics if `create_info.requirements.memory_type_bits` is not less than 2<sup>*n*</sup>
1166 /// where *n* is the number of available memory types.
1167 /// - Panics if `create_info.dedicated_allocation` is `Some` and
1168 /// `create_info.requirements.size` doesn't match the memory requirements of the resource.
1169 /// - Panics if finding a suitable memory type failed. This only happens if the
1170 /// `create_info.requirements` correspond to those of an optimal image but
1171 /// `create_info.usage` is not [`MemoryUsage::DeviceOnly`].
1172 ///
1173 /// # Errors
1174 ///
1175 /// - Returns an error if allocating a new block is required and failed. This can be one of the
1176 /// OOM errors or [`TooManyObjects`].
1177 /// - Returns [`OutOfPoolMemory`] if `create_info.allocate_preference` is
1178 /// [`MemoryAllocatePreference::NeverAllocate`] and none of the pools of suitable memory
1179 /// types have enough free space.
1180 /// - Returns [`DedicatedAllocationRequired`] if `create_info.allocate_preference` is
1181 /// [`MemoryAllocatePreference::NeverAllocate`] and
1182 /// `create_info.requirements.requires_dedicated_allocation` is `true`.
1183 /// - Returns [`BlockSizeExceeded`] if `create_info.allocate_preference` is
1184 /// [`MemoryAllocatePreference::NeverAllocate`] and `create_info.requirements.size` is greater
1185 /// than the block size for all heaps of suitable memory types.
1186 /// - Returns [`SuballocatorBlockSizeExceeded`] if `S` is `PoolAllocator<BLOCK_SIZE>` and
1187 /// `create_info.size` is greater than `BLOCK_SIZE` and a dedicated allocation was not
1188 /// created.
1189 ///
1190 /// [`TooManyObjects`]: VulkanError::TooManyObjects
1191 /// [`OutOfPoolMemory`]: AllocationCreationError::OutOfPoolMemory
1192 /// [`DedicatedAllocationRequired`]: AllocationCreationError::DedicatedAllocationRequired
1193 /// [`BlockSizeExceeded`]: AllocationCreationError::BlockSizeExceeded
1194 /// [`SuballocatorBlockSizeExceeded`]: AllocationCreationError::SuballocatorBlockSizeExceeded
    fn allocate(
1196 &self,
1197 requirements: MemoryRequirements,
1198 allocation_type: AllocationType,
1199 create_info: AllocationCreateInfo,
1200 dedicated_allocation: Option<DedicatedAllocation<'_>>,
1201 ) -> Result<MemoryAlloc, AllocationCreationError> {
1202 self.validate_allocate(requirements, dedicated_allocation);
1203
1204 unsafe {
1205 self.allocate_unchecked(
1206 requirements,
1207 allocation_type,
1208 create_info,
1209 dedicated_allocation,
1210 )
1211 }
1212 }
1213
    unsafe fn allocate_unchecked(
1215 &self,
1216 requirements: MemoryRequirements,
1217 allocation_type: AllocationType,
1218 create_info: AllocationCreateInfo,
1219 mut dedicated_allocation: Option<DedicatedAllocation<'_>>,
1220 ) -> Result<MemoryAlloc, AllocationCreationError> {
1221 let MemoryRequirements {
1222 layout,
1223 mut memory_type_bits,
1224 mut prefers_dedicated_allocation,
1225 requires_dedicated_allocation,
1226 } = requirements;
1227 let AllocationCreateInfo {
1228 usage,
1229 allocate_preference,
1230 _ne: _,
1231 } = create_info;
1232
1233 let create_info = SuballocationCreateInfo {
1234 layout,
1235 allocation_type,
1236 _ne: crate::NonExhaustive(()),
1237 };
1238
1239 let size = layout.size();
1240 memory_type_bits &= self.memory_type_bits;
1241
1242 let filter = usage.into();
1243 let mut memory_type_index = self
1244 .find_memory_type_index(memory_type_bits, filter)
1245 .expect("couldn't find a suitable memory type");
1246
1247 if !self.dedicated_allocation {
1248 dedicated_allocation = None;
1249 }
1250
1251 let export_handle_types = if self.export_handle_types.is_empty() {
1252 ExternalMemoryHandleTypes::empty()
1253 } else {
1254 self.export_handle_types[memory_type_index as usize]
1255 };
1256
1257 loop {
1258 let memory_type = self.pools[memory_type_index as usize].memory_type;
1259 let block_size = self.block_sizes[memory_type.heap_index as usize];
1260
1261 let res = match allocate_preference {
1262 MemoryAllocatePreference::Unknown => {
1263 if requires_dedicated_allocation {
1264 self.allocate_dedicated_unchecked(
1265 memory_type_index,
1266 size,
1267 dedicated_allocation,
1268 export_handle_types,
1269 )
1270 } else {
                        if size > block_size / 2 {
                            prefers_dedicated_allocation = true;
                        }
                        if self.device.allocation_count() > self.max_allocations
                            && size <= block_size
                        {
                            prefers_dedicated_allocation = false;
                        }

                        if prefers_dedicated_allocation {
                            self.allocate_dedicated_unchecked(
                                memory_type_index,
                                size,
                                dedicated_allocation,
                                export_handle_types,
                            )
                            // Fall back to suballocation.
                            .or_else(|err| {
                                if size <= block_size {
                                    self.allocate_from_type_unchecked(
                                        memory_type_index,
                                        create_info.clone(),
                                        true, // A dedicated allocation already failed.
                                    )
                                    .map_err(|_| err)
                                } else {
                                    Err(err)
                                }
                            })
                        } else {
                            self.allocate_from_type_unchecked(
                                memory_type_index,
                                create_info.clone(),
                                false,
                            )
                            // Fall back to dedicated allocation. There is still hope: the
                            // smallest block size tried (1/8 of the block size) may have been
                            // greater than the requested allocation size, so a dedicated
                            // allocation of exactly the requested size may yet succeed.
                            .or_else(|_| {
                                self.allocate_dedicated_unchecked(
                                    memory_type_index,
                                    size,
                                    dedicated_allocation,
                                    export_handle_types,
                                )
                            })
                        }
                    }
                }
                MemoryAllocatePreference::NeverAllocate => {
                    if requires_dedicated_allocation {
                        return Err(AllocationCreationError::DedicatedAllocationRequired);
                    }

                    self.allocate_from_type_unchecked(memory_type_index, create_info.clone(), true)
                }
                MemoryAllocatePreference::AlwaysAllocate => self.allocate_dedicated_unchecked(
                    memory_type_index,
                    size,
                    dedicated_allocation,
                    export_handle_types,
                ),
            };

            match res {
                Ok(allocation) => return Ok(allocation),
                // This is not recoverable.
                Err(AllocationCreationError::SuballocatorBlockSizeExceeded) => {
                    return Err(AllocationCreationError::SuballocatorBlockSizeExceeded);
                }
                // Try a different memory type.
                Err(err) => {
                    memory_type_bits &= !(1 << memory_type_index);
                    memory_type_index = self
                        .find_memory_type_index(memory_type_bits, filter)
                        .ok_or(err)?;
                }
            }
        }
    }

    unsafe fn allocate_dedicated_unchecked(
        &self,
        memory_type_index: u32,
        allocation_size: DeviceSize,
        mut dedicated_allocation: Option<DedicatedAllocation<'_>>,
        export_handle_types: ExternalMemoryHandleTypes,
    ) -> Result<MemoryAlloc, AllocationCreationError> {
        // `VkMemoryDedicatedAllocateInfo` is only provided by Vulkan 1.1 or the
        // `khr_dedicated_allocation` extension; without either, the request for a dedicated
        // allocation has to be dropped.
        if !(self.device.api_version() >= Version::V1_1
            || self.device.enabled_extensions().khr_dedicated_allocation)
        {
            dedicated_allocation = None;
        }

        let allocate_info = MemoryAllocateInfo {
            allocation_size,
            memory_type_index,
            dedicated_allocation,
            export_handle_types,
            flags: self.flags,
            ..Default::default()
        };
        let mut allocation = MemoryAlloc::new(DeviceMemory::allocate_unchecked(
            self.device.clone(),
            allocate_info,
            None,
        )?)?;
        allocation.set_allocation_type(self.allocation_type);

        Ok(allocation)
    }
}

unsafe impl<S: Suballocator> MemoryAllocator for Arc<GenericMemoryAllocator<S>> {
    fn find_memory_type_index(
        &self,
        memory_type_bits: u32,
        filter: MemoryTypeFilter,
    ) -> Option<u32> {
        (**self).find_memory_type_index(memory_type_bits, filter)
    }

    fn allocate_from_type(
        &self,
        memory_type_index: u32,
        create_info: SuballocationCreateInfo,
    ) -> Result<MemoryAlloc, AllocationCreationError> {
        (**self).allocate_from_type(memory_type_index, create_info)
    }

    unsafe fn allocate_from_type_unchecked(
        &self,
        memory_type_index: u32,
        create_info: SuballocationCreateInfo,
        never_allocate: bool,
    ) -> Result<MemoryAlloc, AllocationCreationError> {
        (**self).allocate_from_type_unchecked(memory_type_index, create_info, never_allocate)
    }

    fn allocate(
        &self,
        requirements: MemoryRequirements,
        allocation_type: AllocationType,
        create_info: AllocationCreateInfo,
        dedicated_allocation: Option<DedicatedAllocation<'_>>,
    ) -> Result<MemoryAlloc, AllocationCreationError> {
        (**self).allocate(
            requirements,
            allocation_type,
            create_info,
            dedicated_allocation,
        )
    }

    unsafe fn allocate_unchecked(
        &self,
        requirements: MemoryRequirements,
        allocation_type: AllocationType,
        create_info: AllocationCreateInfo,
        dedicated_allocation: Option<DedicatedAllocation<'_>>,
    ) -> Result<MemoryAlloc, AllocationCreationError> {
        (**self).allocate_unchecked(
            requirements,
            allocation_type,
            create_info,
            dedicated_allocation,
        )
    }

    unsafe fn allocate_dedicated_unchecked(
        &self,
        memory_type_index: u32,
        allocation_size: DeviceSize,
        dedicated_allocation: Option<DedicatedAllocation<'_>>,
        export_handle_types: ExternalMemoryHandleTypes,
    ) -> Result<MemoryAlloc, AllocationCreationError> {
        (**self).allocate_dedicated_unchecked(
            memory_type_index,
            allocation_size,
            dedicated_allocation,
            export_handle_types,
        )
    }
}

unsafe impl<S: Suballocator> DeviceOwned for GenericMemoryAllocator<S> {
    fn device(&self) -> &Arc<Device> {
        &self.device
    }
}

/// Parameters to create a new [`GenericMemoryAllocator`].
#[derive(Clone, Debug)]
pub struct GenericMemoryAllocatorCreateInfo<'b, 'e> {
    /// Lets you configure the block sizes for various heap size classes.
    ///
    /// Each entry is a pair of the threshold for the heap size and the block size that should be
    /// used for that heap. Must be sorted by threshold and all thresholds must be unique. Must
    /// contain a baseline threshold of 0.
    ///
    /// The allocator keeps a pool of [`DeviceMemory`] blocks for each memory type, so each memory
    /// type that resides in a heap whose size crosses one of the thresholds will use the
    /// corresponding block size. If multiple thresholds apply to a given heap, the block size
    /// corresponding to the largest threshold is chosen.
    ///
    /// The block size is the maximum size of a `DeviceMemory` block that is tried. If allocating
    /// a block with that size fails, the allocator tries 1/2, 1/4 and 1/8 of the block size, in
    /// that order, until one succeeds; if all of them fail, a dedicated allocation is attempted
    /// for the allocation instead. An allocation with a size greater than half the block size is
    /// always made a dedicated allocation. None of this applies when using
    /// [`MemoryAllocatePreference::NeverAllocate`], however.
    ///
    /// The default value is `&[]`, which must be overridden.
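    ///
    /// A minimal sketch of a possible configuration (the numbers are illustrative assumptions,
    /// not a recommendation): heaps smaller than 1 GiB get 64 MiB blocks, and heaps of 1 GiB or
    /// more get 256 MiB blocks.
    ///
    /// ```ignore
    /// let create_info = GenericMemoryAllocatorCreateInfo {
    ///     block_sizes: &[
    ///         // The baseline threshold of 0 is required.
    ///         (0, 64 * 1024 * 1024),
    ///         // Heaps of at least 1 GiB use 256 MiB blocks.
    ///         (1024 * 1024 * 1024, 256 * 1024 * 1024),
    ///     ],
    ///     ..Default::default()
    /// };
    /// ```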
    pub block_sizes: &'b [(Threshold, BlockSize)],

    /// The allocation type that should be used for root allocations.
    ///
    /// You only need to worry about this if you're using [`PoolAllocator`] as the suballocator,
    /// as all suballocations that the pool allocator makes inherit their allocation type from
    /// the parent allocation. For the [`FreeListAllocator`] and the [`BuddyAllocator`] this must
    /// be [`AllocationType::Unknown`], otherwise you will get panics. It does not matter what
    /// this is when using the [`BumpAllocator`].
    ///
    /// The default value is [`AllocationType::Unknown`].
    pub allocation_type: AllocationType,

    /// Whether the allocator should use the dedicated allocation APIs.
    ///
    /// This means that when the allocator decides that an allocation should not be suballocated,
    /// but rather should have its own block of [`DeviceMemory`], that allocation will be made a
    /// dedicated allocation. Otherwise, such allocations are still made as free-standing ([root])
    /// allocations, just not [dedicated] ones.
    ///
    /// Dedicated allocations are an optimization which may result in better performance, so
    /// there really is no reason to disable this option, unless the restrictions that they bring
    /// with them are a problem. Namely, a dedicated allocation must only be used for the
    /// resource it was created for, which means that [reusing the memory] for something else is
    /// not possible, [suballocating it] is not possible, and [aliasing it] is also not possible.
    ///
    /// This option is silently ignored (treated as `false`) if the device API version is below
    /// 1.1 and the [`khr_dedicated_allocation`] extension is not enabled on the device.
    ///
    /// The default value is `true`.
    ///
    /// [root]: MemoryAlloc::is_root
    /// [dedicated]: MemoryAlloc::is_dedicated
    /// [reusing the memory]: MemoryAlloc::try_unwrap
    /// [suballocating it]: Suballocator
    /// [aliasing it]: MemoryAlloc::alias
    /// [`khr_dedicated_allocation`]: crate::device::DeviceExtensions::khr_dedicated_allocation
    pub dedicated_allocation: bool,

    /// Lets you configure the external memory handle types that the [`DeviceMemory`] blocks will
    /// be allocated with.
    ///
    /// Must be either empty or contain one element for each memory type. When `DeviceMemory` is
    /// allocated, the external handle types corresponding to the memory type index are looked up
    /// here and used for the allocation.
    ///
    /// The default value is `&[]`.
    pub export_handle_types: &'e [ExternalMemoryHandleTypes],

    /// Whether the allocator should allocate the [`DeviceMemory`] blocks with the
    /// [`DEVICE_ADDRESS`] flag set.
    ///
    /// This is required if you want to allocate memory for buffers that have the
    /// [`SHADER_DEVICE_ADDRESS`] usage set. For this option too, there is no reason to disable
    /// it.
    ///
    /// This option is silently ignored (treated as `false`) if the [`buffer_device_address`]
    /// feature is not enabled on the device or if the [`ext_buffer_device_address`] extension is
    /// enabled on the device. It is also ignored if the device API version is below 1.1 and the
    /// [`khr_device_group`] extension is not enabled on the device.
    ///
    /// The default value is `true`.
    ///
    /// [`DEVICE_ADDRESS`]: MemoryAllocateFlags::DEVICE_ADDRESS
    /// [`SHADER_DEVICE_ADDRESS`]: crate::buffer::BufferUsage::SHADER_DEVICE_ADDRESS
    /// [`buffer_device_address`]: crate::device::Features::buffer_device_address
    /// [`ext_buffer_device_address`]: crate::device::DeviceExtensions::ext_buffer_device_address
    /// [`khr_device_group`]: crate::device::DeviceExtensions::khr_device_group
    pub device_address: bool,

    pub _ne: crate::NonExhaustive,
}

pub type Threshold = DeviceSize;

pub type BlockSize = DeviceSize;

impl Default for GenericMemoryAllocatorCreateInfo<'_, '_> {
    #[inline]
    fn default() -> Self {
        GenericMemoryAllocatorCreateInfo {
            block_sizes: &[],
            allocation_type: AllocationType::Unknown,
            dedicated_allocation: true,
            export_handle_types: &[],
            device_address: true,
            _ne: crate::NonExhaustive(()),
        }
    }
}

/// Error that can be returned when creating a [`GenericMemoryAllocator`].
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum GenericMemoryAllocatorCreationError {
    RequirementNotMet {
        required_for: &'static str,
        requires_one_of: RequiresOneOf,
    },
}

impl Error for GenericMemoryAllocatorCreationError {}

impl Display for GenericMemoryAllocatorCreationError {
    fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), FmtError> {
        match self {
            Self::RequirementNotMet {
                required_for,
                requires_one_of,
            } => write!(
                f,
                "a requirement was not met for: {}; requires one of: {}",
                required_for, requires_one_of,
            ),
        }
    }
}

impl From<RequirementNotMet> for GenericMemoryAllocatorCreationError {
    fn from(err: RequirementNotMet) -> Self {
        Self::RequirementNotMet {
            required_for: err.required_for,
            requires_one_of: err.requires_one_of,
        }
    }
}

/// > **Note**: Returns `0` on overflow.
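///
/// A worked example: with an alignment of 4, `align_up` maps the values 5, 6, 7 and 8 all to 8,
/// by first adding `alignment - 1` (giving 8 through 11) and then masking off the low bits with
/// `align_down`.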
#[inline(always)]
pub(crate) const fn align_up(val: DeviceSize, alignment: DeviceAlignment) -> DeviceSize {
    align_down(val.wrapping_add(alignment.as_devicesize() - 1), alignment)
}

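/// Rounds `val` down to the nearest multiple of `alignment`. Since [`DeviceAlignment`] is always
/// a power of two, this can be done with a simple bitmask.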
#[inline(always)]
pub(crate) const fn align_down(val: DeviceSize, alignment: DeviceAlignment) -> DeviceSize {
    val & !(alignment.as_devicesize() - 1)
}

mod array_vec {
    use std::ops::{Deref, DerefMut};

    /// Minimal implementation of an `ArrayVec`. Useful when a `Vec` is needed but there is a
    /// known limit on the number of elements, so that it can occupy real estate on the stack.
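    ///
    /// A usage sketch (not doc-tested, since the type is crate-private):
    ///
    /// ```ignore
    /// // Only the first `len` elements are exposed through `Deref`.
    /// let v = ArrayVec::new(2, [1, 2, 0]);
    /// assert_eq!(&*v, &[1, 2]);
    /// ```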
    #[derive(Clone, Copy, Debug)]
    pub(super) struct ArrayVec<T, const N: usize> {
        len: usize,
        data: [T; N],
    }

    impl<T, const N: usize> ArrayVec<T, N> {
        pub fn new(len: usize, data: [T; N]) -> Self {
            assert!(len <= N);

            ArrayVec { len, data }
        }
    }

    impl<T, const N: usize> Deref for ArrayVec<T, N> {
        type Target = [T];

        fn deref(&self) -> &Self::Target {
            // SAFETY: `self.len <= N`.
            unsafe { self.data.get_unchecked(0..self.len) }
        }
    }

    impl<T, const N: usize> DerefMut for ArrayVec<T, N> {
        fn deref_mut(&mut self) -> &mut Self::Target {
            // SAFETY: `self.len <= N`.
            unsafe { self.data.get_unchecked_mut(0..self.len) }
        }
    }
}