// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or https://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.

//! Location in memory that contains data.
//!
//! A Vulkan buffer is very similar to a buffer that you would use in programming languages in
//! general, in the sense that it is a location in memory that contains data. The difference
//! between a Vulkan buffer and a regular buffer is that the content of a Vulkan buffer is
//! accessible from the GPU.
//!
//! Vulkano does not perform any specific marshalling of buffer data. The representation of the
//! buffer in memory is identical between the CPU and GPU. Because the Rust compiler is allowed to
//! reorder struct fields at will by default when using `#[repr(Rust)]`, it is advised to mark each
//! struct requiring input assembly as `#[repr(C)]`. This forces Rust to follow the standard C
//! layout: each field is laid out in memory in the order of declaration and aligned to a
//! multiple of its alignment.
//!
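//! As a plain-Rust illustration (nothing Vulkan-specific, and the `Vertex` type here is just an
//! example), `#[repr(C)]` makes the layout predictable:
//!
//! ```
//! #[repr(C)]
//! struct Vertex {
//!     position: [f32; 3], // bytes 0..12
//!     color: [f32; 4],    // bytes 12..28
//! }
//!
//! // Seven `f32`s, each 4 bytes and 4-byte aligned, packed in declaration order.
//! assert_eq!(std::mem::size_of::<Vertex>(), 28);
//! assert_eq!(std::mem::align_of::<Vertex>(), 4);
//! ```
//!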
//! # Multiple levels of abstraction
//!
//! - The low-level implementation of a buffer is [`RawBuffer`], which corresponds directly to a
//!   `VkBuffer`, and as such doesn't hold onto any memory.
//! - [`Buffer`] is a `RawBuffer` with memory bound to it, and with state tracking.
//! - [`Subbuffer`] is what you will use most of the time, as it is what all the APIs expect. It is
//!   a reference to a portion of a `Buffer`. `Subbuffer` also has a type parameter, which is a
//!   hint for how the data in the portion of the buffer is going to be interpreted.
//!
//! # `Subbuffer` allocation
//!
//! There are two ways to get a `Subbuffer`:
//!
//! - By using the functions on `Buffer`, which create a new buffer and memory allocation each
//!   time, and give you a `Subbuffer` that has an entire `Buffer` dedicated to it.
//! - By using the [`SubbufferAllocator`], which creates `Subbuffer`s by suballocating existing
//!   `Buffer`s such that the `Buffer`s can keep being reused.
//!
//! Which of these you should choose depends on the use case. For example, if you need to upload
//! data to the device each frame, then you should use the `SubbufferAllocator`. The same goes if
//! you need to download data very frequently, or if you need to allocate a lot of intermediary
//! buffers that are only accessed by the device. On the other hand, if you need to upload some
//! data just once, or if you can keep reusing the same buffer (because its size is unchanging),
//! it's best to use a dedicated `Buffer` for that.
//!
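//! In sketch form, the suballocating path looks roughly like this (assuming a memory allocator
//! named `memory_allocator` is already set up; see the [`allocator`] module for the exact API):
//!
//! ```ignore
//! use vulkano::buffer::{
//!     allocator::{SubbufferAllocator, SubbufferAllocatorCreateInfo},
//!     BufferUsage,
//! };
//!
//! // One long-lived allocator whose arenas are reused across frames.
//! let allocator = SubbufferAllocator::new(
//!     memory_allocator.clone(),
//!     SubbufferAllocatorCreateInfo {
//!         buffer_usage: BufferUsage::UNIFORM_BUFFER,
//!         ..Default::default()
//!     },
//! );
//!
//! // Each call hands out a small `Subbuffer` suballocated from a shared `Buffer`.
//! let subbuffer = allocator.allocate_sized::<[f32; 4]>()?;
//! *subbuffer.write()? = [0.0, 1.0, 2.0, 3.0];
//! ```
//!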
//! # Memory usage
//!
//! When allocating memory for a buffer, you have to specify a *memory usage*. This tells the
//! memory allocator what memory type it should pick for the allocation.
//!
//! - [`MemoryUsage::DeviceOnly`] will allocate a buffer that's usually located in device-local
//!   memory and whose content can't be directly accessed by your application. Accessing this
//!   buffer from the device is generally faster compared to accessing a buffer that's located in
//!   host-visible memory.
//! - [`MemoryUsage::Upload`] and [`MemoryUsage::Download`] both allocate from a host-visible
//!   memory type, which means the buffer can be accessed directly from the host. Buffers allocated
//!   with these memory usages are needed to get data to and from the device.
//!
//! Take for example a buffer that is under constant access by the device, but whose content you
//! also need to read on the host from time to time. It may be a good idea to use a device-local
//! buffer as the main buffer and a host-visible buffer as a staging area. Then, whenever you need
//! to read the main buffer, ask the device to copy from the device-local buffer to the
//! host-visible buffer, and read the host-visible buffer instead.
//!
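//! In sketch form (the buffer names are placeholders, and command recording and synchronization
//! are elided; the example on [`Buffer`] shows the full setup for the upload direction):
//!
//! ```ignore
//! // Ask the device to copy the current contents of the device-local buffer
//! // into the host-visible readback buffer.
//! cbb.copy_buffer(CopyBufferInfo::buffers(
//!     device_local_buffer.clone(),
//!     readback_buffer.clone(),
//! ))?;
//!
//! // Once the copy has completed on the device, read the data on the host.
//! let data = readback_buffer.read()?;
//! ```
//!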
//! # Buffer usage
//!
//! When you create a buffer, you have to specify its *usage*. In other words, you have to
//! specify the way it is going to be used. Trying to use a buffer in a way that wasn't specified
//! when you created it will result in a runtime error.
//!
//! You can use buffers for the following purposes:
//!
//! - Can contain arbitrary data that can be transferred from/to other buffers and images.
//! - Can be read and modified from a shader.
//! - Can be used as a source of vertices and indices.
//! - Can be used as a source of parameters for draw indirect commands.
//!
//! Accessing a buffer from a shader can be done in the following ways:
//!
//! - As a uniform buffer. Uniform buffers are read-only.
//! - As a storage buffer. Storage buffers can be read and written.
//! - As a uniform texel buffer. Contrary to a uniform buffer, the data is interpreted by the GPU
//!   and can be, for example, normalized.
//! - As a storage texel buffer. Additionally, some data formats can be modified with atomic
//!   operations.
//!
//! Using uniform/storage texel buffers requires creating a *buffer view*. See [the `view` module]
//! for how to create a buffer view.
//!
//! See also [the `shader` module documentation] for information about how buffer contents need to
//! be laid out in accordance with the shader interface.
//!
//! [`RawBuffer`]: self::sys::RawBuffer
//! [`SubbufferAllocator`]: self::allocator::SubbufferAllocator
//! [`MemoryUsage::DeviceOnly`]: crate::memory::allocator::MemoryUsage::DeviceOnly
//! [`MemoryUsage::Upload`]: crate::memory::allocator::MemoryUsage::Upload
//! [`MemoryUsage::Download`]: crate::memory::allocator::MemoryUsage::Download
//! [the `view` module]: self::view
//! [the `shader` module documentation]: crate::shader

pub use self::{
    subbuffer::{BufferContents, BufferContentsLayout, Subbuffer},
    sys::BufferCreateInfo,
    usage::BufferUsage,
};
use self::{
    subbuffer::{ReadLockError, WriteLockError},
    sys::RawBuffer,
};
use crate::{
    device::{Device, DeviceOwned},
    macros::vulkan_bitflags,
    memory::{
        allocator::{
            AllocationCreateInfo, AllocationCreationError, AllocationType, DeviceLayout,
            MemoryAlloc, MemoryAllocator,
        },
        is_aligned, DedicatedAllocation, DeviceAlignment, ExternalMemoryHandleType,
        ExternalMemoryHandleTypes, ExternalMemoryProperties, MemoryRequirements,
    },
    range_map::RangeMap,
    sync::{future::AccessError, CurrentAccess, Sharing},
    DeviceSize, NonZeroDeviceSize, RequirementNotMet, RequiresOneOf, Version, VulkanError,
    VulkanObject,
};
use parking_lot::{Mutex, MutexGuard};
use smallvec::SmallVec;
use std::{
    error::Error,
    fmt::{Display, Error as FmtError, Formatter},
    hash::{Hash, Hasher},
    mem::size_of_val,
    ops::Range,
    ptr,
    sync::Arc,
};

pub mod allocator;
pub mod subbuffer;
pub mod sys;
mod usage;
pub mod view;

/// A storage for raw bytes.
///
/// Unlike [`RawBuffer`], a `Buffer` has memory backing it, and can be used normally.
///
/// See [the module-level documentation] for more information about buffers.
///
/// # Examples
///
/// Sometimes, you need a buffer that is rarely accessed by the host. To get the best performance
/// in this case, one should use a buffer in device-local memory, which is inaccessible from the
/// host. As such, to initialize or otherwise access such a buffer, we need a *staging buffer*.
///
/// The following example outlines the general strategy one may take when initializing a
/// device-local buffer.
///
/// ```
/// use vulkano::{
///     buffer::{BufferUsage, Buffer, BufferCreateInfo},
///     command_buffer::{
///         AutoCommandBufferBuilder, CommandBufferUsage, CopyBufferInfo,
///         PrimaryCommandBufferAbstract,
///     },
///     memory::allocator::{AllocationCreateInfo, MemoryUsage},
///     sync::GpuFuture,
///     DeviceSize,
/// };
///
/// # let device: std::sync::Arc<vulkano::device::Device> = return;
/// # let queue: std::sync::Arc<vulkano::device::Queue> = return;
/// # let memory_allocator: vulkano::memory::allocator::StandardMemoryAllocator = return;
/// # let command_buffer_allocator: vulkano::command_buffer::allocator::StandardCommandBufferAllocator = return;
/// // Simple iterator to construct test data.
/// let data = (0..10_000).map(|i| i as f32);
///
/// // Create a host-accessible buffer initialized with the data.
/// let temporary_accessible_buffer = Buffer::from_iter(
///     &memory_allocator,
///     BufferCreateInfo {
///         // Specify that this buffer will be used as a transfer source.
///         usage: BufferUsage::TRANSFER_SRC,
///         ..Default::default()
///     },
///     AllocationCreateInfo {
///         // Specify use for upload to the device.
///         usage: MemoryUsage::Upload,
///         ..Default::default()
///     },
///     data,
/// )
/// .unwrap();
///
/// // Create a buffer in device-local memory with enough space for a slice of `10_000` floats.
/// let device_local_buffer = Buffer::new_slice::<f32>(
///     &memory_allocator,
///     BufferCreateInfo {
///         // Specify use as a storage buffer and transfer destination.
///         usage: BufferUsage::STORAGE_BUFFER | BufferUsage::TRANSFER_DST,
///         ..Default::default()
///     },
///     AllocationCreateInfo {
///         // Specify use by the device only.
///         usage: MemoryUsage::DeviceOnly,
///         ..Default::default()
///     },
///     10_000 as DeviceSize,
/// )
/// .unwrap();
///
/// // Create a one-time command to copy between the buffers.
/// let mut cbb = AutoCommandBufferBuilder::primary(
///     &command_buffer_allocator,
///     queue.queue_family_index(),
///     CommandBufferUsage::OneTimeSubmit,
/// )
/// .unwrap();
/// cbb.copy_buffer(CopyBufferInfo::buffers(
///     temporary_accessible_buffer,
///     device_local_buffer.clone(),
/// ))
/// .unwrap();
/// let cb = cbb.build().unwrap();
///
/// // Execute the copy command and wait for completion before proceeding.
/// cb.execute(queue.clone())
///     .unwrap()
///     .then_signal_fence_and_flush()
///     .unwrap()
///     .wait(None /* timeout */)
///     .unwrap();
/// ```
///
/// [the module-level documentation]: self
#[derive(Debug)]
pub struct Buffer {
    inner: RawBuffer,
    memory: BufferMemory,
    state: Mutex<BufferState>,
}

/// The type of backing memory that a buffer can have.
#[derive(Debug)]
pub enum BufferMemory {
    /// The buffer is backed by normal memory, bound with [`bind_memory`].
    ///
    /// [`bind_memory`]: RawBuffer::bind_memory
    Normal(MemoryAlloc),

    /// The buffer is backed by sparse memory, bound with [`bind_sparse`].
    ///
    /// [`bind_sparse`]: crate::device::QueueGuard::bind_sparse
    Sparse,
}

impl Buffer {
    /// Creates a new `Buffer` and writes `data` in it. Returns a [`Subbuffer`] spanning the whole
    /// buffer.
    ///
    /// This only works with memory types that are host-visible. If you want to upload data to a
    /// buffer allocated in device-local memory, you will need to create a staging buffer and copy
    /// the contents over.
    ///
    /// > **Note**: You should **not** set the `buffer_info.size` field. The function does that
    /// > itself.
    ///
    /// # Panics
    ///
    /// - Panics if `T` has zero size.
    /// - Panics if `T` has an alignment greater than `64`.
    pub fn from_data<T>(
        allocator: &(impl MemoryAllocator + ?Sized),
        buffer_info: BufferCreateInfo,
        allocation_info: AllocationCreateInfo,
        data: T,
    ) -> Result<Subbuffer<T>, BufferError>
    where
        T: BufferContents,
    {
        let buffer = Buffer::new_sized(allocator, buffer_info, allocation_info)?;

        unsafe { ptr::write(&mut *buffer.write()?, data) };

        Ok(buffer)
    }

    /// Creates a new `Buffer` and writes all elements of `iter` in it. Returns a [`Subbuffer`]
    /// spanning the whole buffer.
    ///
    /// This only works with memory types that are host-visible. If you want to upload data to a
    /// buffer allocated in device-local memory, you will need to create a staging buffer and copy
    /// the contents over.
    ///
    /// > **Note**: You should **not** set the `buffer_info.size` field. The function does that
    /// > itself.
    ///
    /// # Panics
    ///
    /// - Panics if `iter` is empty.
    pub fn from_iter<T, I>(
        allocator: &(impl MemoryAllocator + ?Sized),
        buffer_info: BufferCreateInfo,
        allocation_info: AllocationCreateInfo,
        iter: I,
    ) -> Result<Subbuffer<[T]>, BufferError>
    where
        T: BufferContents,
        I: IntoIterator<Item = T>,
        I::IntoIter: ExactSizeIterator,
    {
        let iter = iter.into_iter();
        let buffer = Buffer::new_slice(
            allocator,
            buffer_info,
            allocation_info,
            iter.len().try_into().unwrap(),
        )?;

        for (o, i) in buffer.write()?.iter_mut().zip(iter) {
            unsafe { ptr::write(o, i) };
        }

        Ok(buffer)
    }

    /// Creates a new uninitialized `Buffer` for sized data. Returns a [`Subbuffer`] spanning the
    /// whole buffer.
    ///
    /// > **Note**: You should **not** set the `buffer_info.size` field. The function does that
    /// > itself.
    pub fn new_sized<T>(
        allocator: &(impl MemoryAllocator + ?Sized),
        buffer_info: BufferCreateInfo,
        allocation_info: AllocationCreateInfo,
    ) -> Result<Subbuffer<T>, BufferError>
    where
        T: BufferContents,
    {
        let layout = T::LAYOUT.unwrap_sized();
        let buffer = Subbuffer::new(Buffer::new(
            allocator,
            buffer_info,
            allocation_info,
            layout,
        )?);

        Ok(unsafe { buffer.reinterpret_unchecked() })
    }

    /// Creates a new uninitialized `Buffer` for a slice. Returns a [`Subbuffer`] spanning the
    /// whole buffer.
    ///
    /// > **Note**: You should **not** set the `buffer_info.size` field. The function does that
    /// > itself.
    ///
    /// # Panics
    ///
    /// - Panics if `len` is zero.
    pub fn new_slice<T>(
        allocator: &(impl MemoryAllocator + ?Sized),
        buffer_info: BufferCreateInfo,
        allocation_info: AllocationCreateInfo,
        len: DeviceSize,
    ) -> Result<Subbuffer<[T]>, BufferError>
    where
        T: BufferContents,
    {
        Buffer::new_unsized(allocator, buffer_info, allocation_info, len)
    }

    /// Creates a new uninitialized `Buffer` for unsized data. Returns a [`Subbuffer`] spanning the
    /// whole buffer.
    ///
    /// > **Note**: You should **not** set the `buffer_info.size` field. The function does that
    /// > itself.
    ///
    /// # Panics
    ///
    /// - Panics if `len` is zero.
    pub fn new_unsized<T>(
        allocator: &(impl MemoryAllocator + ?Sized),
        buffer_info: BufferCreateInfo,
        allocation_info: AllocationCreateInfo,
        len: DeviceSize,
    ) -> Result<Subbuffer<T>, BufferError>
    where
        T: BufferContents + ?Sized,
    {
        let len = NonZeroDeviceSize::new(len).expect("empty slices are not valid buffer contents");
        let layout = T::LAYOUT.layout_for_len(len).unwrap();
        let buffer = Subbuffer::new(Buffer::new(
            allocator,
            buffer_info,
            allocation_info,
            layout,
        )?);

        Ok(unsafe { buffer.reinterpret_unchecked() })
    }

    /// Creates a new uninitialized `Buffer` with the given `layout`.
    ///
    /// > **Note**: You should **not** set the `buffer_info.size` field. The function does that
    /// > itself.
    ///
    /// # Panics
    ///
    /// - Panics if `layout.alignment()` is greater than `64`.
    pub fn new(
        allocator: &(impl MemoryAllocator + ?Sized),
        mut buffer_info: BufferCreateInfo,
        allocation_info: AllocationCreateInfo,
        layout: DeviceLayout,
    ) -> Result<Arc<Self>, BufferError> {
        assert!(layout.alignment().as_devicesize() <= 64);
        // TODO: Enable once sparse binding materializes
        // assert!(!buffer_info.flags.contains(BufferCreateFlags::SPARSE_BINDING));

        assert!(
            buffer_info.size == 0,
            "`Buffer::new*` functions set the `buffer_info.size` field themselves, you should not \
            set it yourself",
        );

        buffer_info.size = layout.size();

        let raw_buffer = RawBuffer::new(allocator.device().clone(), buffer_info)?;
        let mut requirements = *raw_buffer.memory_requirements();
        requirements.layout = requirements.layout.align_to(layout.alignment()).unwrap();

        let mut allocation = unsafe {
            allocator.allocate_unchecked(
                requirements,
                AllocationType::Linear,
                allocation_info,
                Some(DedicatedAllocation::Buffer(&raw_buffer)),
            )
        }?;
        debug_assert!(is_aligned(
            allocation.offset(),
            requirements.layout.alignment(),
        ));
        debug_assert!(allocation.size() == requirements.layout.size());

        // The implementation might require a larger size than we wanted. With this it is easier to
        // invalidate and flush the whole buffer. It does not affect the allocation in any way.
        allocation.shrink(layout.size());

        unsafe { raw_buffer.bind_memory_unchecked(allocation) }
            .map(Arc::new)
            .map_err(|(err, _, _)| err.into())
    }

    fn from_raw(inner: RawBuffer, memory: BufferMemory) -> Self {
        let state = Mutex::new(BufferState::new(inner.size()));

        Buffer {
            inner,
            memory,
            state,
        }
    }

    /// Returns the type of memory that is backing this buffer.
    #[inline]
    pub fn memory(&self) -> &BufferMemory {
        &self.memory
    }

    /// Returns the memory requirements for this buffer.
    #[inline]
    pub fn memory_requirements(&self) -> &MemoryRequirements {
        self.inner.memory_requirements()
    }

    /// Returns the flags the buffer was created with.
    #[inline]
    pub fn flags(&self) -> BufferCreateFlags {
        self.inner.flags()
    }

    /// Returns the size of the buffer in bytes.
    #[inline]
    pub fn size(&self) -> DeviceSize {
        self.inner.size()
    }

    /// Returns the usage the buffer was created with.
    #[inline]
    pub fn usage(&self) -> BufferUsage {
        self.inner.usage()
    }

    /// Returns the sharing the buffer was created with.
    #[inline]
    pub fn sharing(&self) -> &Sharing<SmallVec<[u32; 4]>> {
        self.inner.sharing()
    }

    /// Returns the external memory handle types that are supported with this buffer.
    #[inline]
    pub fn external_memory_handle_types(&self) -> ExternalMemoryHandleTypes {
        self.inner.external_memory_handle_types()
    }

    /// Returns the device address for this buffer.
    // TODO: Caching?
    pub fn device_address(&self) -> Result<NonZeroDeviceSize, BufferError> {
        let device = self.device();

        // VUID-vkGetBufferDeviceAddress-bufferDeviceAddress-03324
        if !device.enabled_features().buffer_device_address {
            return Err(BufferError::RequirementNotMet {
                required_for: "`Buffer::device_address`",
                requires_one_of: RequiresOneOf {
                    features: &["buffer_device_address"],
                    ..Default::default()
                },
            });
        }

        // VUID-VkBufferDeviceAddressInfo-buffer-02601
        if !self.usage().intersects(BufferUsage::SHADER_DEVICE_ADDRESS) {
            return Err(BufferError::BufferMissingUsage);
        }

        let info = ash::vk::BufferDeviceAddressInfo {
            buffer: self.handle(),
            ..Default::default()
        };
        let fns = device.fns();
        let f = if device.api_version() >= Version::V1_2 {
            fns.v1_2.get_buffer_device_address
        } else if device.enabled_extensions().khr_buffer_device_address {
            fns.khr_buffer_device_address.get_buffer_device_address_khr
        } else {
            fns.ext_buffer_device_address.get_buffer_device_address_ext
        };
        let ptr = unsafe { f(device.handle(), &info) };

        Ok(NonZeroDeviceSize::new(ptr).unwrap())
    }

    pub(crate) fn state(&self) -> MutexGuard<'_, BufferState> {
        self.state.lock()
    }
}

unsafe impl VulkanObject for Buffer {
    type Handle = ash::vk::Buffer;

    #[inline]
    fn handle(&self) -> Self::Handle {
        self.inner.handle()
    }
}

unsafe impl DeviceOwned for Buffer {
    #[inline]
    fn device(&self) -> &Arc<Device> {
        self.inner.device()
    }
}

impl PartialEq for Buffer {
    #[inline]
    fn eq(&self, other: &Self) -> bool {
        self.inner == other.inner
    }
}

impl Eq for Buffer {}

impl Hash for Buffer {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.inner.hash(state);
    }
}

/// The current state of a buffer.
#[derive(Debug)]
pub(crate) struct BufferState {
    ranges: RangeMap<DeviceSize, BufferRangeState>,
}

impl BufferState {
    fn new(size: DeviceSize) -> Self {
        BufferState {
            ranges: [(
                0..size,
                BufferRangeState {
                    current_access: CurrentAccess::Shared {
                        cpu_reads: 0,
                        gpu_reads: 0,
                    },
                },
            )]
            .into_iter()
            .collect(),
        }
    }

    pub(crate) fn check_cpu_read(&self, range: Range<DeviceSize>) -> Result<(), ReadLockError> {
        for (_range, state) in self.ranges.range(&range) {
            match &state.current_access {
                CurrentAccess::CpuExclusive { .. } => return Err(ReadLockError::CpuWriteLocked),
                CurrentAccess::GpuExclusive { .. } => return Err(ReadLockError::GpuWriteLocked),
                CurrentAccess::Shared { .. } => (),
            }
        }

        Ok(())
    }

    pub(crate) unsafe fn cpu_read_lock(&mut self, range: Range<DeviceSize>) {
        self.ranges.split_at(&range.start);
        self.ranges.split_at(&range.end);

        for (_range, state) in self.ranges.range_mut(&range) {
            match &mut state.current_access {
                CurrentAccess::Shared { cpu_reads, .. } => {
                    *cpu_reads += 1;
                }
                _ => unreachable!("Buffer is being written by the CPU or GPU"),
            }
        }
    }

    pub(crate) unsafe fn cpu_read_unlock(&mut self, range: Range<DeviceSize>) {
        self.ranges.split_at(&range.start);
        self.ranges.split_at(&range.end);

        for (_range, state) in self.ranges.range_mut(&range) {
            match &mut state.current_access {
                CurrentAccess::Shared { cpu_reads, .. } => *cpu_reads -= 1,
                _ => unreachable!("Buffer was not locked for CPU read"),
            }
        }
    }

    pub(crate) fn check_cpu_write(&self, range: Range<DeviceSize>) -> Result<(), WriteLockError> {
        for (_range, state) in self.ranges.range(&range) {
            match &state.current_access {
                CurrentAccess::CpuExclusive => return Err(WriteLockError::CpuLocked),
                CurrentAccess::GpuExclusive { .. } => return Err(WriteLockError::GpuLocked),
                CurrentAccess::Shared {
                    cpu_reads: 0,
                    gpu_reads: 0,
                } => (),
                CurrentAccess::Shared { cpu_reads, .. } if *cpu_reads > 0 => {
                    return Err(WriteLockError::CpuLocked)
                }
                CurrentAccess::Shared { .. } => return Err(WriteLockError::GpuLocked),
            }
        }

        Ok(())
    }

    pub(crate) unsafe fn cpu_write_lock(&mut self, range: Range<DeviceSize>) {
        self.ranges.split_at(&range.start);
        self.ranges.split_at(&range.end);

        for (_range, state) in self.ranges.range_mut(&range) {
            state.current_access = CurrentAccess::CpuExclusive;
        }
    }

    pub(crate) unsafe fn cpu_write_unlock(&mut self, range: Range<DeviceSize>) {
        self.ranges.split_at(&range.start);
        self.ranges.split_at(&range.end);

        for (_range, state) in self.ranges.range_mut(&range) {
            match &mut state.current_access {
                CurrentAccess::CpuExclusive => {
                    state.current_access = CurrentAccess::Shared {
                        cpu_reads: 0,
                        gpu_reads: 0,
                    }
                }
                _ => unreachable!("Buffer was not locked for CPU write"),
            }
        }
    }

    pub(crate) fn check_gpu_read(&self, range: Range<DeviceSize>) -> Result<(), AccessError> {
        for (_range, state) in self.ranges.range(&range) {
            match &state.current_access {
                CurrentAccess::Shared { .. } => (),
                _ => return Err(AccessError::AlreadyInUse),
            }
        }

        Ok(())
    }

    pub(crate) unsafe fn gpu_read_lock(&mut self, range: Range<DeviceSize>) {
        self.ranges.split_at(&range.start);
        self.ranges.split_at(&range.end);

        for (_range, state) in self.ranges.range_mut(&range) {
            match &mut state.current_access {
                CurrentAccess::GpuExclusive { gpu_reads, .. }
                | CurrentAccess::Shared { gpu_reads, .. } => *gpu_reads += 1,
                _ => unreachable!("Buffer is being written by the CPU"),
            }
        }
    }

    pub(crate) unsafe fn gpu_read_unlock(&mut self, range: Range<DeviceSize>) {
        self.ranges.split_at(&range.start);
        self.ranges.split_at(&range.end);

        for (_range, state) in self.ranges.range_mut(&range) {
            match &mut state.current_access {
                CurrentAccess::GpuExclusive { gpu_reads, .. }
                | CurrentAccess::Shared { gpu_reads, .. } => *gpu_reads -= 1,
                _ => unreachable!("Buffer was not locked for GPU read"),
            }
        }
    }

check_gpu_write(&self, range: Range<DeviceSize>) -> Result<(), AccessError>727     pub(crate) fn check_gpu_write(&self, range: Range<DeviceSize>) -> Result<(), AccessError> {
728         for (_range, state) in self.ranges.range(&range) {
729             match &state.current_access {
730                 CurrentAccess::Shared {
731                     cpu_reads: 0,
732                     gpu_reads: 0,
733                 } => (),
734                 _ => return Err(AccessError::AlreadyInUse),
735             }
736         }
737 
738         Ok(())
739     }
740 
gpu_write_lock(&mut self, range: Range<DeviceSize>)741     pub(crate) unsafe fn gpu_write_lock(&mut self, range: Range<DeviceSize>) {
742         self.ranges.split_at(&range.start);
743         self.ranges.split_at(&range.end);
744 
745         for (_range, state) in self.ranges.range_mut(&range) {
746             match &mut state.current_access {
747                 CurrentAccess::GpuExclusive { gpu_writes, .. } => *gpu_writes += 1,
748                 &mut CurrentAccess::Shared {
749                     cpu_reads: 0,
750                     gpu_reads,
751                 } => {
752                     state.current_access = CurrentAccess::GpuExclusive {
753                         gpu_reads,
754                         gpu_writes: 1,
755                     }
756                 }
757                 _ => unreachable!("Buffer is being accessed by the CPU"),
758             }
759         }
760     }

    pub(crate) unsafe fn gpu_write_unlock(&mut self, range: Range<DeviceSize>) {
        self.ranges.split_at(&range.start);
        self.ranges.split_at(&range.end);

        for (_range, state) in self.ranges.range_mut(&range) {
            match &mut state.current_access {
                &mut CurrentAccess::GpuExclusive {
                    gpu_reads,
                    gpu_writes: 1,
                } => {
                    state.current_access = CurrentAccess::Shared {
                        cpu_reads: 0,
                        gpu_reads,
                    }
                }
                CurrentAccess::GpuExclusive { gpu_writes, .. } => *gpu_writes -= 1,
                _ => unreachable!("Buffer was not locked for GPU write"),
            }
        }
    }
}
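// The lock/unlock methods above implement a small state machine over
// `CurrentAccess`: the first GPU write upgrades a CPU-free `Shared` state to
// `GpuExclusive`, and releasing the last GPU write downgrades it back,
// preserving any outstanding GPU reads. A simplified standalone model of
// those two transitions (the enum and functions below are a sketch mirroring
// the code above, not the real vulkano internals):

```rust
// Stand-in for vulkano's internal `CurrentAccess` state.
#[derive(Debug, PartialEq, Eq)]
enum CurrentAccess {
    Shared { cpu_reads: usize, gpu_reads: usize },
    GpuExclusive { gpu_reads: usize, gpu_writes: usize },
}

fn gpu_write_lock(access: &mut CurrentAccess) {
    match &mut *access {
        CurrentAccess::GpuExclusive { gpu_writes, .. } => *gpu_writes += 1,
        &mut CurrentAccess::Shared { cpu_reads: 0, gpu_reads } => {
            // First GPU write: upgrade a CPU-free `Shared` state to exclusive.
            *access = CurrentAccess::GpuExclusive { gpu_reads, gpu_writes: 1 };
        }
        _ => unreachable!("buffer is being accessed by the CPU"),
    }
}

fn gpu_write_unlock(access: &mut CurrentAccess) {
    match &mut *access {
        &mut CurrentAccess::GpuExclusive { gpu_reads, gpu_writes: 1 } => {
            // Last GPU write released: downgrade back to `Shared`,
            // keeping the surviving GPU reads.
            *access = CurrentAccess::Shared { cpu_reads: 0, gpu_reads };
        }
        CurrentAccess::GpuExclusive { gpu_writes, .. } => *gpu_writes -= 1,
        _ => unreachable!("buffer was not locked for GPU write"),
    }
}

fn main() {
    let mut access = CurrentAccess::Shared { cpu_reads: 0, gpu_reads: 2 };
    gpu_write_lock(&mut access);
    assert_eq!(
        access,
        CurrentAccess::GpuExclusive { gpu_reads: 2, gpu_writes: 1 }
    );
    gpu_write_lock(&mut access);
    gpu_write_unlock(&mut access);
    gpu_write_unlock(&mut access);
    // Both writes released: the pre-existing GPU reads survive the round trip.
    assert_eq!(access, CurrentAccess::Shared { cpu_reads: 0, gpu_reads: 2 });
    println!("ok");
}
```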

/// The current state of a specific range of bytes in a buffer.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct BufferRangeState {
    current_access: CurrentAccess,
}

/// Error that can happen in buffer functions.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum BufferError {
    /// A runtime error occurred.
    VulkanError(VulkanError),

    /// Allocating memory failed.
    AllocError(AllocationCreationError),

    /// A requirement for the operation was not met.
    RequirementNotMet {
        required_for: &'static str,
        requires_one_of: RequiresOneOf,
    },

    /// The buffer is missing the `SHADER_DEVICE_ADDRESS` usage.
    BufferMissingUsage,

    /// The memory was created dedicated to a resource, but not to this buffer.
    DedicatedAllocationMismatch,

    /// A dedicated allocation is required for this buffer, but one was not provided.
    DedicatedAllocationRequired,

    /// The host is already using this buffer in a way that is incompatible with the
    /// requested access.
    InUseByHost,

    /// The device is already using this buffer in a way that is incompatible with the
    /// requested access.
    InUseByDevice,

    /// The specified size exceeded the value of the `max_buffer_size` limit.
    MaxBufferSizeExceeded {
        size: DeviceSize,
        max: DeviceSize,
    },

    /// The offset of the allocation does not have the required alignment.
    MemoryAllocationNotAligned {
        allocation_offset: DeviceSize,
        required_alignment: DeviceAlignment,
    },

    /// The size of the allocation is smaller than what is required.
    MemoryAllocationTooSmall {
        allocation_size: DeviceSize,
        required_size: DeviceSize,
    },

    /// The buffer was created with the `SHADER_DEVICE_ADDRESS` usage, but the memory does not
    /// support this usage.
    MemoryBufferDeviceAddressNotSupported,

    /// The memory was created with export handle types, but none of these handle types were
    /// enabled on the buffer.
    MemoryExternalHandleTypesDisjoint {
        buffer_handle_types: ExternalMemoryHandleTypes,
        memory_export_handle_types: ExternalMemoryHandleTypes,
    },

    /// The memory was created with an import, but the import's handle type was not enabled on
    /// the buffer.
    MemoryImportedHandleTypeNotEnabled {
        buffer_handle_types: ExternalMemoryHandleTypes,
        memory_imported_handle_type: ExternalMemoryHandleType,
    },

    /// The memory backing this buffer is not visible to the host.
    MemoryNotHostVisible,

    /// The protection of buffer and memory are not equal.
    MemoryProtectedMismatch {
        buffer_protected: bool,
        memory_protected: bool,
    },

    /// The provided memory type is not one of the allowed memory types that can be bound to this
    /// buffer.
    MemoryTypeNotAllowed {
        provided_memory_type_index: u32,
        allowed_memory_type_bits: u32,
    },

    /// The sharing mode was set to `Concurrent`, but one of the specified queue family indices was
    /// out of range.
    SharingQueueFamilyIndexOutOfRange {
        queue_family_index: u32,
        queue_family_count: u32,
    },
}

impl Error for BufferError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        match self {
            Self::VulkanError(err) => Some(err),
            Self::AllocError(err) => Some(err),
            _ => None,
        }
    }
}

impl Display for BufferError {
    fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), FmtError> {
        match self {
            Self::VulkanError(_) => write!(f, "a runtime error occurred"),
            Self::AllocError(_) => write!(f, "allocating memory failed"),
            Self::RequirementNotMet {
                required_for,
                requires_one_of,
            } => write!(
                f,
                "a requirement was not met for: {}; requires one of: {}",
                required_for, requires_one_of,
            ),
            Self::BufferMissingUsage => {
                write!(f, "the buffer is missing the `SHADER_DEVICE_ADDRESS` usage")
            }
            Self::DedicatedAllocationMismatch => write!(
                f,
                "the memory was created dedicated to a resource, but not to this buffer",
            ),
            Self::DedicatedAllocationRequired => write!(
                f,
                "a dedicated allocation is required for this buffer, but one was not provided",
            ),
            Self::InUseByHost => write!(
                f,
                "the host is already using this buffer in a way that is incompatible with the \
                requested access",
            ),
            Self::InUseByDevice => write!(
                f,
                "the device is already using this buffer in a way that is incompatible with the \
                requested access",
            ),
            Self::MaxBufferSizeExceeded { .. } => write!(
                f,
                "the specified size exceeded the value of the `max_buffer_size` limit",
            ),
            Self::MemoryAllocationNotAligned {
                allocation_offset,
                required_alignment,
            } => write!(
                f,
                "the offset of the allocation ({}) does not have the required alignment ({:?})",
                allocation_offset, required_alignment,
            ),
            Self::MemoryAllocationTooSmall {
                allocation_size,
                required_size,
            } => write!(
                f,
                "the size of the allocation ({}) is smaller than what is required ({})",
                allocation_size, required_size,
            ),
            Self::MemoryBufferDeviceAddressNotSupported => write!(
                f,
                "the buffer was created with the `SHADER_DEVICE_ADDRESS` usage, but the memory \
                does not support this usage",
            ),
            Self::MemoryExternalHandleTypesDisjoint { .. } => write!(
                f,
                "the memory was created with export handle types, but none of these handle types \
                were enabled on the buffer",
            ),
            Self::MemoryImportedHandleTypeNotEnabled { .. } => write!(
                f,
                "the memory was created with an import, but the import's handle type was not \
                enabled on the buffer",
            ),
            Self::MemoryNotHostVisible => write!(
                f,
                "the memory backing this buffer is not visible to the host",
            ),
            Self::MemoryProtectedMismatch {
                buffer_protected,
                memory_protected,
            } => write!(
                f,
                "the protection of buffer ({}) and memory ({}) are not equal",
                buffer_protected, memory_protected,
            ),
            Self::MemoryTypeNotAllowed {
                provided_memory_type_index,
                allowed_memory_type_bits,
            } => write!(
                f,
                "the provided memory type ({}) is not one of the allowed memory types (",
                provided_memory_type_index,
            )
            .and_then(|_| {
                let mut first = true;

                for i in (0..size_of_val(allowed_memory_type_bits) * 8)
                    .filter(|i| allowed_memory_type_bits & (1 << i) != 0)
                {
                    if first {
                        write!(f, "{}", i)?;
                        first = false;
                    } else {
                        write!(f, ", {}", i)?;
                    }
                }

                Ok(())
            })
            .and_then(|_| write!(f, ") that can be bound to this buffer")),
            Self::SharingQueueFamilyIndexOutOfRange { .. } => write!(
                f,
                "the sharing mode was set to `Concurrent`, but one of the specified queue family \
                indices was out of range",
            ),
        }
    }
}

impl From<VulkanError> for BufferError {
    fn from(err: VulkanError) -> Self {
        Self::VulkanError(err)
    }
}

impl From<AllocationCreationError> for BufferError {
    fn from(err: AllocationCreationError) -> Self {
        Self::AllocError(err)
    }
}

impl From<RequirementNotMet> for BufferError {
    fn from(err: RequirementNotMet) -> Self {
        Self::RequirementNotMet {
            required_for: err.required_for,
            requires_one_of: err.requires_one_of,
        }
    }
}

impl From<ReadLockError> for BufferError {
    fn from(err: ReadLockError) -> Self {
        match err {
            ReadLockError::CpuWriteLocked => Self::InUseByHost,
            ReadLockError::GpuWriteLocked => Self::InUseByDevice,
        }
    }
}

impl From<WriteLockError> for BufferError {
    fn from(err: WriteLockError) -> Self {
        match err {
            WriteLockError::CpuLocked => Self::InUseByHost,
            WriteLockError::GpuLocked => Self::InUseByDevice,
        }
    }
}
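// These `From` impls exist so internal lock failures can bubble up through `?`
// as a `BufferError`. A minimal standalone sketch of that pattern, using
// stand-in enums that mirror `ReadLockError`/`BufferError` above rather than
// the real vulkano types:

```rust
// Stand-in enums mirroring the conversion above (not the real vulkano types).
#[derive(Debug, PartialEq, Eq)]
enum ReadLockError {
    CpuWriteLocked,
    GpuWriteLocked,
}

#[derive(Debug, PartialEq, Eq)]
enum BufferError {
    InUseByHost,
    InUseByDevice,
}

impl From<ReadLockError> for BufferError {
    fn from(err: ReadLockError) -> Self {
        match err {
            ReadLockError::CpuWriteLocked => Self::InUseByHost,
            ReadLockError::GpuWriteLocked => Self::InUseByDevice,
        }
    }
}

// `?` applies the `From` conversion automatically when the error types differ,
// so a lock failure surfaces to the caller as the public error type.
fn read(locked: bool) -> Result<(), BufferError> {
    let lock_result: Result<(), ReadLockError> = if locked {
        Err(ReadLockError::GpuWriteLocked)
    } else {
        Ok(())
    };
    lock_result?;
    Ok(())
}

fn main() {
    assert_eq!(read(true), Err(BufferError::InUseByDevice));
    assert!(read(false).is_ok());
    println!("ok");
}
```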

vulkan_bitflags! {
    #[non_exhaustive]

    /// Flags to be set when creating a buffer.
    BufferCreateFlags = BufferCreateFlags(u32);

    /* TODO: enable
    /// The buffer will be backed by sparse memory binding (through queue commands) instead of
    /// regular binding (through [`bind_memory`]).
    ///
    /// The [`sparse_binding`] feature must be enabled on the device.
    ///
    /// [`bind_memory`]: sys::RawBuffer::bind_memory
    /// [`sparse_binding`]: crate::device::Features::sparse_binding
    SPARSE_BINDING = SPARSE_BINDING,*/

    /* TODO: enable
    /// The buffer can be used without being fully resident in memory at the time of use.
    ///
    /// This requires the `sparse_binding` flag as well.
    ///
    /// The [`sparse_residency_buffer`] feature must be enabled on the device.
    ///
    /// [`sparse_residency_buffer`]: crate::device::Features::sparse_residency_buffer
    SPARSE_RESIDENCY = SPARSE_RESIDENCY,*/

    /* TODO: enable
    /// The buffer's memory can alias with another buffer or a different part of the same buffer.
    ///
    /// This requires the `sparse_binding` flag as well.
    ///
    /// The [`sparse_residency_aliased`] feature must be enabled on the device.
    ///
    /// [`sparse_residency_aliased`]: crate::device::Features::sparse_residency_aliased
    SPARSE_ALIASED = SPARSE_ALIASED,*/

    /* TODO: enable
    /// The buffer is protected, and can only be used in combination with protected memory and other
    /// protected objects.
    ///
    /// The device API version must be at least 1.1.
    PROTECTED = PROTECTED {
        api_version: V1_1,
    },*/

    /* TODO: enable
    /// The buffer's device address can be saved and reused on a subsequent run.
    ///
    /// The device API version must be at least 1.2, or either the [`khr_buffer_device_address`] or
    /// [`ext_buffer_device_address`] extension must be enabled on the device.
    DEVICE_ADDRESS_CAPTURE_REPLAY = DEVICE_ADDRESS_CAPTURE_REPLAY {
        api_version: V1_2,
        device_extensions: [khr_buffer_device_address, ext_buffer_device_address],
    },*/
}

/// The buffer configuration to query in
/// [`PhysicalDevice::external_buffer_properties`](crate::device::physical::PhysicalDevice::external_buffer_properties).
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct ExternalBufferInfo {
    /// The external handle type that will be used with the buffer.
    pub handle_type: ExternalMemoryHandleType,

    /// The usage that the buffer will have.
    pub usage: BufferUsage,

    /// The sparse binding parameters that will be used.
    pub sparse: Option<BufferCreateFlags>,

    pub _ne: crate::NonExhaustive,
}

impl ExternalBufferInfo {
    /// Returns an `ExternalBufferInfo` with the specified `handle_type`.
    #[inline]
    pub fn handle_type(handle_type: ExternalMemoryHandleType) -> Self {
        Self {
            handle_type,
            usage: BufferUsage::empty(),
            sparse: None,
            _ne: crate::NonExhaustive(()),
        }
    }
}
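// Because of the private `_ne` field, callers cannot build `ExternalBufferInfo`
// with a plain struct literal; the intended pattern is to start from
// `handle_type` and override fields with struct update syntax. A standalone
// sketch of that pattern with simplified stand-in types (the real
// `ExternalMemoryHandleType`/`BufferUsage` live elsewhere in the crate):

```rust
// Stand-in types; the real info struct uses vulkano's own enums and bitflags.
#[derive(Clone, Debug, PartialEq, Eq)]
enum HandleType {
    OpaqueFd,
}

#[derive(Clone, Debug, PartialEq, Eq)]
struct Info {
    handle_type: HandleType,
    usage: u32,
}

impl Info {
    // Mirrors `ExternalBufferInfo::handle_type`: supplies defaults for the rest.
    fn handle_type(handle_type: HandleType) -> Self {
        Self {
            handle_type,
            usage: 0,
        }
    }
}

fn main() {
    // Struct update syntax keeps the constructor's defaults for other fields.
    let info = Info {
        usage: 0b10,
        ..Info::handle_type(HandleType::OpaqueFd)
    };
    assert_eq!(info.handle_type, HandleType::OpaqueFd);
    assert_eq!(info.usage, 0b10);
    println!("ok");
}
```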

/// The external memory properties supported for buffers with a given configuration.
#[derive(Clone, Debug)]
#[non_exhaustive]
pub struct ExternalBufferProperties {
    /// The properties for external memory.
    pub external_memory_properties: ExternalMemoryProperties,
}