Struct re_renderer::view_builder::ViewBuilder

pub struct ViewBuilder {
setup: ViewTargetSetup,
queued_draws: Vec<QueueableDrawData>,
outline_mask_processor: Option<OutlineMaskProcessor>,
screenshot_processor: Option<ScreenshotProcessor>,
picking_processor: Option<PickingLayerProcessor>,
}
The highest level rendering block in re_renderer.
Used to build up/collect various resources and then send them off for rendering of a single view.
Fields

setup: ViewTargetSetup
queued_draws: Vec<QueueableDrawData>
outline_mask_processor: Option<OutlineMaskProcessor>
screenshot_processor: Option<ScreenshotProcessor>
picking_processor: Option<PickingLayerProcessor>
Implementations

impl ViewBuilder
pub const MAIN_TARGET_COLOR_FORMAT: TextureFormat = wgpu::TextureFormat::Rgba8UnormSrgb
Color format used for the main target of the view builder.
Eventually we'll want to make this an HDR format and apply tonemapping during composite. However, note that it is easy to run into subtle MSAA quality issues then: applying MSAA resolve before tonemapping is problematic, as it means we're doing MSAA in linear space. This is especially problematic at bright/dark edges, where we may lose "smoothness"! For a nice illustration see this blog post by MJP. We would either need to keep the MSAA target and tonemap it, or apply a manual resolve where we inverse-tonemap non-fully-covered pixels before averaging (an optimized variant of this is described by AMD here). In any case, this gets us onto a potentially much costlier rendering path, especially for tiling GPUs.
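The "resolve before tonemapping" problem can be shown with a few numbers. This is a toy sketch (plain Rust, not re_renderer code) using Reinhard as a stand-in tonemapper; the actual operator used during composite may differ.

```rust
// Why resolving MSAA in linear space before tonemapping hurts edge quality.
fn tonemap(x: f32) -> f32 {
    x / (1.0 + x) // Reinhard: maps [0, inf) into [0, 1)
}

fn main() {
    // An edge pixel whose 4 MSAA samples straddle a bright HDR surface and a dark one.
    let samples = [8.0_f32, 8.0, 0.1, 0.1];

    // What a plain MSAA resolve does: average in linear space, then tonemap.
    let resolve_then_tonemap = tonemap(samples.iter().sum::<f32>() / 4.0);

    // What would preserve perceptual smoothness: tonemap each sample, then average.
    let tonemap_then_resolve = samples.iter().map(|&s| tonemap(s)).sum::<f32>() / 4.0;

    // The two disagree substantially, which shows up as rough edges after tonemapping.
    println!("{resolve_then_tonemap:.3} vs {tonemap_then_resolve:.3}");
}
```

The bright samples dominate the linear average, so the resolved edge pixel ends up much brighter than the perceptually correct blend of the two surfaces.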
pub const MAIN_TARGET_ALPHA_TO_COVERAGE_COLOR_STATE: ColorTargetState = _
Use this color state when targeting the main target with alpha-to-coverage.

If blending with the background is enabled, we need alpha to indicate how much we overwrite the background (i.e. when we do blending of the screen target with whatever was there during Self::composite). However, when using alpha-to-coverage, we need alpha to also indicate the coverage of the pixel from which the samples are derived. What we'd like to happen is:
- use alpha to indicate coverage == number of samples written to
- write alpha == 1.0 for each active sample regardless of what we set earlier

This way, we'd get the correct alpha and end up with pre-multiplied color values during MSAA resolve, just like with opaque geometry! OpenGL exposes this as GL_SAMPLE_ALPHA_TO_ONE, Vulkan as alphaToOne.

Unfortunately, WebGPU does not support this! Instead, alpha has a double meaning: coverage and alpha of all written samples. This means that anti-aliased edges (== alpha < 1.0) will always create "holes" in the target texture, even if there was already an opaque object prior. To work around this, we accumulate alpha values with an additive blending operation, so that previous opaque objects won't be overwritten with alpha < 1.0. (This is obviously wrong for a variety of reasons, but it looks good enough.) Another problem with this is that during MSAA resolve we now average those too-low alpha values. This makes us end up with a premultiplied alpha value that looks like it has additive blending applied, since the resulting alpha value is not what was used to determine the color! -> See workaround in composite.wgsl.
Ultimately, we have the following options to fix this properly, sorted from most desirable to least:
- Don't use alpha-to-coverage; use SampleMask instead.
  - This is not supported on WebGL, which either needs a special path or, more likely, has to just disable anti-aliasing in these cases.
  - As long as we use 4x MSAA, we have a pretty good idea where the samples are (see jumpflooding_init_msaa.wgsl), so we can actually use this to improve the quality of the anti-aliasing a lot by turning on/off the samples that are actually covered.
- Figure out a way to never need to blend with the background in Self::composite.
- Figure out how to use GL_SAMPLE_ALPHA_TO_ONE after all. This involves bringing this up with the WebGPU spec team and won't work on WebGL.
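The hole problem and the additive-alpha workaround described above can be illustrated with a toy resolve simulation (plain Rust, not re_renderer code; the real behavior happens in fixed-function hardware):

```rust
// Simulate an MSAA resolve over the alpha channel of 4 samples.
fn resolve(samples: &[f32; 4]) -> f32 {
    samples.iter().sum::<f32>() / 4.0
}

fn main() {
    // An opaque object was already drawn: all 4 samples have alpha 1.0.
    let opaque_background = [1.0_f32; 4];

    // WebGPU alpha-to-coverage: alpha doubles as coverage AND as the written
    // sample alpha, so a fragment with alpha 0.5 covers 2 samples and writes
    // alpha 0.5 over the previously opaque samples.
    let mut replaced = opaque_background;
    replaced[0] = 0.5;
    replaced[1] = 0.5;
    println!("replace: {}", resolve(&replaced)); // < 1.0: a "hole" over opaque geometry!

    // Workaround: additive alpha blending (clamped by the unorm target), so the
    // previously opaque samples stay at alpha 1.0 and the background survives.
    let mut additive = opaque_background;
    additive[0] = (additive[0] + 0.5).min(1.0);
    additive[1] = (additive[1] + 0.5).min(1.0);
    println!("additive: {}", resolve(&additive));
}
```

With plain replacement the resolved alpha drops to 0.75 even though an opaque object is behind the edge; with the additive accumulation it stays at 1.0, at the cost of the skewed alpha values discussed above.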
pub const SCREENSHOT_COLOR_FORMAT: TextureFormat = wgpu::TextureFormat::Rgba8Unorm
The texture format used for screenshots.
pub const MAIN_TARGET_DEPTH_FORMAT: TextureFormat = wgpu::TextureFormat::Depth32Float
Depth format used for the main target of the view builder.

[wgpu::TextureFormat::Depth24Plus] would be preferable for performance; see Nvidia's Vulkan dos and don'ts. However, the problem with "24Plus" is that we no longer know which format we'll actually get, which is a problem e.g. for vertex-shader-determined depth offsets. (This is a real concern - for example, on Metal we always get a floating point target with this!)

[wgpu::TextureFormat::Depth32Float], on the other hand, is widely supported and has the best possible precision (with the reverse infinite-z projection which we're already using).
pub const MAIN_TARGET_SAMPLE_COUNT: u32 = 4u32
Enable MSAA always. This also makes our pipeline less variable, as we need MSAA resolve steps if we want any MSAA at all!

4 samples are the only count WebGPU supports, and currently wgpu as well (tracking issue for more options on native).
pub const MAIN_TARGET_DEFAULT_MSAA_STATE: MultisampleState = _
Default multisample state that any [wgpu::RenderPipeline] drawing to the main target needs to use.

In rare cases, pipelines may want to enable alpha-to-coverage and/or sample masks.
pub const DEFAULT_DEPTH_CLEAR: LoadOp<f32> = _
Default value for clearing depth buffer to infinity.
0.0 == far since we’re using reverse-z.
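The reverse-z convention can be sketched with a small example (assumed convention for illustration, not re_renderer's actual projection code): with a reverse infinite-z projection, NDC depth is near / view_z, so the near plane maps to 1.0 and infinity to 0.0, which is exactly the clear value.

```rust
// Reverse infinite-z depth: near plane -> 1.0, infinitely far -> 0.0.
fn reverse_infinite_z(near: f32, view_z: f32) -> f32 {
    near / view_z
}

fn main() {
    let near = 0.1_f32;
    assert_eq!(reverse_infinite_z(near, near), 1.0); // near plane
    assert_eq!(reverse_infinite_z(near, f32::INFINITY), 0.0); // matches the 0.0 clear value
    // Closer objects produce larger depth values, hence a Greater depth compare:
    assert!(reverse_infinite_z(near, 1.0) > reverse_infinite_z(near, 10.0));
    println!("ok");
}
```

Since floats are densest near 0.0, reverse-z spends that precision on the far range, counteracting the 1/z compression and giving near-uniform precision across the whole depth range.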
pub const MAIN_TARGET_DEFAULT_DEPTH_STATE: Option<DepthStencilState> = _
Default depth state for enabled depth write & read.
pub fn new(ctx: &RenderContext, config: TargetConfiguration) -> Self
pub fn resolution_in_pixel(&self) -> [u32; 2]
Resolution in pixels as configured on view builder creation.
fn draw_phase(&self, renderers: &Renderers, render_pipelines: &StaticResourcePoolReadLockAccessor<'_, GpuRenderPipelineHandle, RenderPipeline>, phase: DrawPhase, pass: &mut RenderPass<'_>)
pub fn queue_draw(&mut self, draw_data: impl Into<QueueableDrawData>) -> &mut Self
pub fn draw(
    &self,
    ctx: &RenderContext,
    clear_color: Rgba
) -> Result<CommandBuffer, PoolError>
Draws the frame as instructed to a temporary HDR target.
pub fn schedule_screenshot<T: 'static + Send + Sync>(
    &mut self,
    ctx: &RenderContext,
    identifier: GpuReadbackIdentifier,
    user_data: T
) -> Result<(), ViewBuilderError>
Schedules the taking of a screenshot.

Needs to be called before ViewBuilder::draw.
Can only be called once per frame per ViewBuilder.

Data from the screenshot needs to be retrieved via crate::ScreenshotProcessor::next_readback_result.
To do so, you need to pass the exact same identifier and type of user data as you've done here:
```rust
use re_renderer::{view_builder::ViewBuilder, RenderContext, ScreenshotProcessor};

fn take_screenshot(ctx: &RenderContext, view_builder: &mut ViewBuilder) {
    view_builder.schedule_screenshot(ctx, 42, "My screenshot".to_owned());
}

fn receive_screenshots(ctx: &RenderContext) {
    while ScreenshotProcessor::next_readback_result::<String>(ctx, 42, |data, extent, user_data| {
        re_log::info!("Received screenshot {}", user_data);
    })
    .is_some()
    {}
}
```
Received data that isn’t retrieved for more than a frame will be automatically discarded.
pub fn schedule_picking_rect<T: 'static + Send + Sync>(
    &mut self,
    ctx: &RenderContext,
    picking_rect: RectInt,
    readback_identifier: GpuReadbackIdentifier,
    readback_user_data: T,
    show_debug_view: bool
) -> Result<(), ViewBuilderError>
Schedules the readback of a rectangle from the picking layer.

Needs to be called before ViewBuilder::draw.
Can only be called once per frame per ViewBuilder.

The result will still be valid if the rectangle is partially or fully outside of bounds. Areas that are not overlapping with the primary target will be filled as if the view's target was bigger, i.e. all values are valid picking IDs; it is up to the user to discard anything that is out of bounds.
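That discarding step can be sketched as follows (the helper name is illustrative, not re_renderer API; it assumes picked positions expressed in view pixels):

```rust
// Keep only picking results whose position actually lies inside the view.
// Positions are i32 because the requested rect may extend past the view's edges.
fn is_in_bounds(pos: [i32; 2], resolution_in_pixel: [u32; 2]) -> bool {
    pos[0] >= 0
        && pos[1] >= 0
        && (pos[0] as u32) < resolution_in_pixel[0]
        && (pos[1] as u32) < resolution_in_pixel[1]
}

fn main() {
    let resolution = [640_u32, 480];
    assert!(is_in_bounds([0, 0], resolution)); // top-left corner
    assert!(is_in_bounds([639, 479], resolution)); // bottom-right corner
    assert!(!is_in_bounds([-1, 10], resolution)); // rect extended past the left edge
    assert!(!is_in_bounds([640, 10], resolution)); // past the right edge
    println!("ok");
}
```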
Note that the picking layer will not be created in the first place if this isn’t called.
Data from the picking rect needs to be retrieved via crate::PickingLayerProcessor::next_readback_result.
To do so, you need to pass the exact same identifier and type of user data as you've done here:
```rust
use re_renderer::{view_builder::ViewBuilder, RectInt, PickingLayerProcessor, RenderContext};

fn schedule_picking_readback(
    ctx: &RenderContext,
    view_builder: &mut ViewBuilder,
    picking_rect: RectInt,
) {
    view_builder.schedule_picking_rect(
        ctx, picking_rect, 42, "My picking readback".to_owned(), false,
    );
}

fn receive_picking_results(ctx: &RenderContext) {
    while let Some(result) = PickingLayerProcessor::next_readback_result::<String>(ctx, 42) {
        re_log::info!("Received picking_data {}", result.user_data);
    }
}
```
Received data that isn’t retrieved for more than a frame will be automatically discarded.
pub fn composite(&self, ctx: &RenderContext, pass: &mut RenderPass<'_>)
Composites the final result of a ViewBuilder to a given output RenderPass.

The bound surface(s) on the RenderPass are expected to be the same format as specified on Context creation.

screen_position specifies where on the output pass the view is placed.
Auto Trait Implementations
impl !Freeze for ViewBuilder
impl !RefUnwindSafe for ViewBuilder
impl Send for ViewBuilder
impl Sync for ViewBuilder
impl Unpin for ViewBuilder
impl !UnwindSafe for ViewBuilder
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> Downcast for T where T: Any
impl<T> DowncastSync for T
impl<T> Instrument for T
impl<T> IntoEither for T