pub struct CpuWriteGpuReadBuffer<T: Pod + Send + Sync> {
write_view: BufferViewMut<'static>,
unwritten_element_range: Range<usize>,
chunk_buffer: Arc<DynamicResource<GpuBufferHandle, BufferDesc, Buffer>>,
byte_offset_in_chunk_buffer: BufferAddress,
_type: PhantomData<T>,
}
A sub-allocated staging buffer that can be written to.

Behaves a bit like a fixed-size Vec in that it keeps track of how many elements were written to it.

We do not allow reading from this buffer, as it is typically backed by write-combined memory. Reading would work, but it can be very slow. For details on why, see "Write combining is not your friend" by Fabian Giesen.

Note that the "Vec-like behavior" further encourages:
- not leaving holes
- keeping writes sequential
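The Vec-like write tracking described above can be sketched with a simplified, CPU-only model. Note this is a hypothetical stand-in (no wgpu, no staging memory; `SimpleWriteBuffer` and `BufferFull` are invented names), intended only to illustrate the `unwritten_element_range` bookkeeping:

```rust
use std::ops::Range;

// Hypothetical stand-in for CpuWriteGpuReadBuffer's bookkeeping:
// a fixed-capacity buffer that tracks which elements are still unwritten.
struct SimpleWriteBuffer<T: Copy> {
    storage: Vec<T>,
    unwritten_element_range: Range<usize>,
}

#[derive(Debug, PartialEq)]
struct BufferFull;

impl<T: Copy + Default> SimpleWriteBuffer<T> {
    fn new(capacity: usize) -> Self {
        Self {
            storage: vec![T::default(); capacity],
            unwritten_element_range: 0..capacity,
        }
    }

    fn remaining_capacity(&self) -> usize {
        self.unwritten_element_range.len()
    }

    fn num_written(&self) -> usize {
        self.unwritten_element_range.start
    }

    // Like `push`: writes one element and advances the write pointer.
    fn push(&mut self, element: T) -> Result<(), BufferFull> {
        if self.remaining_capacity() == 0 {
            return Err(BufferFull);
        }
        self.storage[self.unwritten_element_range.start] = element;
        self.unwritten_element_range.start += 1;
        Ok(())
    }

    // Like `extend_from_slice`: writes as many elements as fit,
    // returning an error if the slice had to be truncated.
    fn extend_from_slice(&mut self, elements: &[T]) -> Result<(), BufferFull> {
        let n = elements.len().min(self.remaining_capacity());
        let start = self.unwritten_element_range.start;
        self.storage[start..start + n].copy_from_slice(&elements[..n]);
        self.unwritten_element_range.start += n;
        if n < elements.len() {
            Err(BufferFull)
        } else {
            Ok(())
        }
    }
}
```

Writes only ever advance `unwritten_element_range.start`, which is what enforces the sequential, hole-free access pattern.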
Fields§

write_view: BufferViewMut<'static>
Write view into the relevant buffer portion.
UNSAFE: The lifetime is transmuted to be 'static. In actuality it is tied to the lifetime of chunk_buffer!

unwritten_element_range: Range<usize>
Range in T elements in write_view that haven't been written yet.

chunk_buffer: Arc<DynamicResource<GpuBufferHandle, BufferDesc, Buffer>>

byte_offset_in_chunk_buffer: BufferAddress

_type: PhantomData<T>
Marker for the type whose alignment and size requirements are honored by write_view.
Implementations§

impl<T> CpuWriteGpuReadBuffer<T>

fn as_mut_byte_slice(&mut self) -> &mut [u8]
Memory as slice.
Note that we can't rely on any alignment guarantees here! We could offset the mapped CPU-side memory, but then the GPU offset would no longer be aligned. There's no way to meet conflicting alignment requirements, so we need to work with unaligned bytes instead. See this comment on this wgpu issue about what we tried before.
Once wgpu has some alignment guarantees, we might be able to use them here to allow faster copies! (Copies of larger blocks are likely less affected, as memcpy typically does dynamic checks/dispatch for SIMD-based copies.)
Do not make this public, as we need to guarantee that the memory is never read from!
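Since no alignment can be assumed, element writes must go through plain byte copies rather than pointer casts. A minimal sketch of such an unaligned write, using `f32::to_le_bytes` as a stand-in for a generic Pod-to-bytes conversion (the real code uses bytemuck for arbitrary Pod types; `write_f32_unaligned` is a hypothetical helper, not part of this crate):

```rust
// Writing a typed value into a byte buffer at an arbitrary (possibly
// unaligned) offset: copy its bytes instead of casting the pointer.
// Casting `&mut bytes[3..]` to `&mut f32` would be UB if unaligned;
// a byte-wise copy is always valid.
fn write_f32_unaligned(bytes: &mut [u8], byte_offset: usize, value: f32) {
    bytes[byte_offset..byte_offset + 4].copy_from_slice(&value.to_le_bytes());
}
```

The byte-wise copy trades a potentially faster aligned store for correctness on any offset, which matches the trade-off described above.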
pub fn extend_from_slice(&mut self, elements: &[T]) -> Result<(), CpuWriteGpuReadError>
Pushes a slice of elements into the buffer.
If the buffer is not big enough, only the first self.remaining_capacity() elements are pushed before returning an error.
pub fn extend(&mut self, elements: impl ExactSizeIterator<Item = T>) -> Result<usize, CpuWriteGpuReadError>
Pushes several elements into the buffer.
If the buffer is not big enough, only the first CpuWriteGpuReadBuffer::remaining_capacity elements are pushed before returning an error.
Otherwise, returns the number of elements pushed for convenience.
pub fn add_n(&mut self, element: T, num_elements: usize) -> Result<(), CpuWriteGpuReadError>
Fills the buffer with n instances of an element.
If the buffer is not big enough, only the first self.remaining_capacity() elements are pushed before returning an error.
pub fn push(&mut self, element: T) -> Result<(), CpuWriteGpuReadError>
Pushes a single element into the buffer and advances the write pointer.
Returns an error if the data no longer fits into the buffer.
pub fn num_written(&self) -> usize
The number of elements pushed into the buffer so far.
pub fn remaining_capacity(&self) -> usize
The number of elements that can still be pushed into the buffer.
pub fn copy_to_texture2d_entire_first_layer(self, encoder: &mut CommandEncoder, destination: &Arc<DynamicResource<GpuTextureHandle, TextureDesc, GpuTextureInternal>>) -> Result<(), CpuWriteGpuReadError>
Copies all data written so far to the first layer of a 2D texture.
Assumes that the buffer consists of as-tightly-packed-as-possible rows of data (taking into account the required padding specified by wgpu::COPY_BYTES_PER_ROW_ALIGNMENT).
Fails if the buffer size is not sufficient to fill the entire texture.
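The "tightly packed rows with padding" layout means each row's byte length is rounded up to a multiple of wgpu::COPY_BYTES_PER_ROW_ALIGNMENT (256 bytes), since wgpu requires that for buffer-to-texture copies. A sketch of that computation, with the constant's value inlined rather than pulled from wgpu:

```rust
// wgpu requires bytes_per_row in buffer→texture copies to be a multiple
// of COPY_BYTES_PER_ROW_ALIGNMENT. Rows are therefore packed as tightly
// as this allows: each row is padded up to the next multiple of 256.
const COPY_BYTES_PER_ROW_ALIGNMENT: u64 = 256; // value of wgpu::COPY_BYTES_PER_ROW_ALIGNMENT

fn padded_bytes_per_row(width_texels: u64, bytes_per_texel: u64) -> u64 {
    let unpadded = width_texels * bytes_per_texel;
    // Round up to the next multiple of the alignment.
    (unpadded + COPY_BYTES_PER_ROW_ALIGNMENT - 1) / COPY_BYTES_PER_ROW_ALIGNMENT
        * COPY_BYTES_PER_ROW_ALIGNMENT
}
```

For example, a 100-texel-wide row of Rgba8 (4 bytes per texel, 400 bytes unpadded) occupies 512 bytes in the staging buffer.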
pub fn copy_to_texture2d(self, encoder: &mut CommandEncoder, destination: ImageCopyTexture<'_>, copy_size: Extent3d) -> Result<(), CpuWriteGpuReadError>
Copies all data written so far to a rectangle on a single 2D texture layer.
Assumes that the buffer consists of as-tightly-packed-as-possible rows of data (taking into account the required padding specified by wgpu::COPY_BYTES_PER_ROW_ALIGNMENT).
Implementation note: Handles the 2D case only, entirely for convenience, as it greatly simplifies the input parameters.
pub fn copy_to_buffer(self, encoder: &mut CommandEncoder, destination: &Arc<DynamicResource<GpuBufferHandle, BufferDesc, Buffer>>, destination_offset: BufferAddress) -> Result<(), CpuWriteGpuReadError>
Copies the entire buffer to another buffer and drops it.
Auto Trait Implementations§
impl<T> Freeze for CpuWriteGpuReadBuffer<T>
impl<T> !RefUnwindSafe for CpuWriteGpuReadBuffer<T>
impl<T> Send for CpuWriteGpuReadBuffer<T>
impl<T> Sync for CpuWriteGpuReadBuffer<T>
impl<T> Unpin for CpuWriteGpuReadBuffer<T> where T: Unpin
impl<T> !UnwindSafe for CpuWriteGpuReadBuffer<T>
Blanket Implementations§

impl<T> BorrowMut<T> for T where T: ?Sized
    fn borrow_mut(&mut self) -> &mut T

impl<T> Downcast for T where T: Any
    fn into_any(self: Box<T>) -> Box<dyn Any>
    Converts Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>, which can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait.
    fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>
    Converts Rc<Trait> (where Trait: Downcast) to Rc<Any>, which can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait.
    fn as_any(&self) -> &(dyn Any + 'static)
    Converts &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any's vtable from &Trait's.
    fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
    Converts &mut Trait (where Trait: Downcast) to &mut Any. This is needed since Rust cannot generate &mut Any's vtable from &mut Trait's.

impl<T> DowncastSync for T

impl<T> Instrument for T
    fn instrument(self, span: Span) -> Instrumented<Self>
    fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoEither for T
    fn into_either(self, into_left: bool) -> Either<Self, Self>
    Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
    fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
    Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.