Experimental
rerun.experimental
Experimental features for Rerun.
These features are not yet stable and may change in future releases without going through the normal deprecation cycle.
Lens = DeriveLens | MutateLens
module-attribute
Union of all lens types.
class Chunk
A single chunk of data from a recording.
entity_path
property
The entity path this chunk belongs to.
id
property
The unique ID of this chunk.
is_empty
property
Whether the chunk has zero rows.
is_static
property
Whether the chunk contains only static data (no timelines).
num_columns
property
The number of columns in this chunk.
num_rows
property
The number of rows in this chunk.
timeline_names
property
The names of all timelines in this chunk.
def apply_lenses(lenses)
Apply one or more lenses to this chunk, returning transformed chunks.
Each lens matches by input component. Columns not consumed by any matching lens are forwarded unchanged as a separate chunk.
If no lens matches the chunk (including when an empty list of lenses is passed), the original chunk is returned unchanged.
| PARAMETER | DESCRIPTION |
|---|---|
| `lenses` | One or more [`Lens`][rerun.experimental.Lens] objects to apply. |

| RETURNS | DESCRIPTION |
|---|---|
| `list[Chunk]` | A list of [`Chunk`][rerun.experimental.Chunk] objects. |
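A minimal sketch combining `from_columns` with a mutate lens. The `"Points3D:positions"` identifier follows the `"Imu:accel"` format used in the lens examples below and is an assumption:

```python
import rerun as rr
from rerun.experimental import Chunk, MutateLens, Selector

chunk = Chunk.from_columns(
    "/robots/arm",
    indexes=[rr.TimeColumn("frame", sequence=[0, 1, 2])],
    columns=rr.Points3D.columns(positions=[[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
)

# Replace the matched component with each point's first coordinate;
# columns the lens does not consume are forwarded as a separate chunk.
lens = MutateLens("Points3D:positions", Selector("[0]"))  # identifier format assumed
for out in chunk.apply_lenses([lens]):
    print(out.entity_path, out.num_rows, out.num_columns)
```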
def apply_selector(source, selector)
Apply a selector to a single component, returning a new chunk with the component transformed.
All other columns (timelines, other components) are preserved unchanged. The source component's existing descriptor is preserved.
For better performance, prefer [`MutateLens`][rerun.experimental.MutateLens] with `apply_lenses`, which processes multiple transformations in a single pass.
| PARAMETER | DESCRIPTION |
|---|---|
| `source` | The component to transform. Its existing descriptor is preserved. |
| `selector` | A [`Selector`][rerun.experimental.Selector] describing the transformation. |

| RETURNS | DESCRIPTION |
|---|---|
| `Chunk` | A new [`Chunk`][rerun.experimental.Chunk] with the component transformed. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If the source component is not found in the chunk or the selector fails to evaluate. |
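A sketch, reusing the chunk from the `apply_lenses` example above (whether `source` is an identifier string or a descriptor is an assumption here):

```python
import pyarrow.compute as pc
from rerun.experimental import Selector

# Double every coordinate of the matched component; all other columns
# and the component's descriptor are preserved.
doubled = chunk.apply_selector(
    "Points3D:positions",  # assumed identifier format, as above
    Selector("[]").pipe(lambda arr: pc.multiply(arr, 2.0)),
)
```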
def format(*, width=240, redact=False)
Return a human-readable string representation of this chunk.
def from_columns(entity_path, indexes, columns)
classmethod
Create a Chunk from columns, mirroring the rerun.send_columns API.
A fresh chunk ID and sequential row IDs are auto-generated.
| PARAMETER | DESCRIPTION |
|---|---|
| `entity_path` | The entity path for this chunk (e.g., `"/camera/image"`). |
| `indexes` | The time columns for this chunk. Each entry is a `TimeColumn`. |
| `columns` | The component columns for this chunk. Each entry is a component column batch (e.g. from `rr.Points3D.columns(...)`). |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If timeline and component column lengths don't match. |
Example:

```python
chunk = Chunk.from_columns(
    "/robots/arm",
    indexes=[rr.TimeColumn("frame", sequence=[0, 1, 2])],
    columns=rr.Points3D.columns(positions=[[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
)
```
def from_record_batch(record_batch)
classmethod
Create a Chunk from a PyArrow RecordBatch with Rerun schema metadata.
The RecordBatch must have Rerun metadata in its schema, as produced by
to_record_batch. This enables round-tripping through PyArrow
transforms. The original chunk ID and row IDs are preserved.
| PARAMETER | DESCRIPTION |
|---|---|
| `record_batch` | A PyArrow `RecordBatch` with Rerun schema metadata. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If the RecordBatch lacks required Rerun schema metadata. |
def to_record_batch()
Convert this chunk to an Arrow RecordBatch.
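A sketch of round-tripping a chunk through a schema-preserving PyArrow transform:

```python
import rerun as rr
from rerun.experimental import Chunk

chunk = Chunk.from_columns(
    "/robots/arm",
    indexes=[rr.TimeColumn("frame", sequence=[0, 1, 2])],
    columns=rr.Points3D.columns(positions=[[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
)

batch = chunk.to_record_batch()   # Rerun metadata travels in batch.schema
sliced = batch.slice(0, 2)        # any transform that keeps the schema intact
back = Chunk.from_record_batch(sliced)
assert back.num_rows == 2         # chunk ID and row IDs are preserved
```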
def with_entity_path(entity_path)
Return a copy of this chunk with a new entity path.
A fresh chunk ID is generated to avoid aliasing the original chunk in downstream caches and indices. Row IDs, timelines, and components are preserved as-is.
| PARAMETER | DESCRIPTION |
|---|---|
| `entity_path` | The new entity path for the returned chunk (e.g. `"/camera/image"`). |
class ChunkStore
A fully-materialized, in-memory chunk store.
Build one from chunks via `ChunkStore.from_chunks`, or fully materialize an [`IndexedReader`][rerun.experimental.IndexedReader] via `reader.stream().collect()`. For lazy, on-demand chunk loading, see [`LazyStore`][rerun.experimental.LazyStore].

Use `stream()` to process chunks through the lazy pipeline, or `write_rrd()` to persist to disk.
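A minimal sketch (IDs and paths are illustrative):

```python
import rerun as rr
from rerun.experimental import Chunk, ChunkStore

store = ChunkStore.from_chunks([
    Chunk.from_columns(
        "/robots/arm",
        indexes=[rr.TimeColumn("frame", sequence=[0, 1, 2])],
        columns=rr.Points3D.columns(positions=[[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
    ),
])
print(len(store))
print(store.summary())  # deterministic, snapshot-friendly
store.write_rrd("arm.rrd", application_id="my_app", recording_id="rec-001")
```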
def __len__()
Return the number of chunks in this store.
def from_chunks(chunks)
staticmethod
Build a ChunkStore from a sequence of chunks.
def schema()
The schema describing all columns in this store.
def stream()
Return a lazy stream over all chunks in this store.
def summary()
Compact, deterministic summary of every chunk in the store.
Each line describes one chunk:
{entity_path} rows={n} static={True|False} timelines=[…] cols=[…]
Useful for snapshot testing.
def write_rrd(path, *, application_id, recording_id)
Write all chunks to an RRD file.
The caller must provide application_id and recording_id explicitly.
class ColumnRule
dataclass
Rule for combining columns with matching suffixes into a Rerun component.
Use the factory methods to create rules:
- `translation3d()`: 3 columns → `Translation3D`
- `rotation_quat()`: 4 columns → `RotationQuat`
- `rotation_axis_angle()`: 4 columns → `RotationAxisAngle`
- `scale3d()`: 3 columns → `Scale3D`
- `scalars()`: N columns → `Scalars` with named series
- `transform()`: 3 + 4 columns → `Transform3D` (translation + rotation)
def rotation_axis_angle(suffixes, *, field_name_override=None)
classmethod
Create a rule that combines 4 columns into a RotationAxisAngle component (3 axis + 1 angle).
def rotation_quat(suffixes, *, field_name_override=None)
classmethod
Create a rule that combines 4 columns into a RotationQuat component.
def scalars(suffixes, *, names, field_name_override=None)
classmethod
Create a rule that combines N columns into a Scalars component with named series.
def scale3d(suffixes, *, field_name_override=None)
classmethod
Create a rule that combines 3 columns into a Scale3D component.
def transform(translation_suffixes, rotation_suffixes, *, field_name_override=None)
classmethod
Create a rule that combines 3 translation + 4 rotation columns into a Transform3D.
Both suffix sets must match with the same sub-prefix for columns to be
combined. In struct mode, produces a nested struct with translation
and quaternion fields. In flat mode, emits both components at the
same entity path.
def translation3d(suffixes, *, field_name_override=None)
classmethod
Create a rule that combines 3 columns into a Translation3D component.
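For instance, a table with columns `pos_x`/`pos_y`/`pos_z` and `rot_x`/`rot_y`/`rot_z`/`rot_w` might be combined like this (column names, suffix spelling, and the file path are illustrative assumptions):

```python
from rerun.experimental import ColumnRule, ParquetReader

rules = [
    # 3 columns sharing a sub-prefix -> one Translation3D component
    ColumnRule.translation3d(["_x", "_y", "_z"]),
    # 4 columns -> one RotationQuat component
    ColumnRule.rotation_quat(["_x", "_y", "_z", "_w"]),
]
reader = ParquetReader("poses.parquet", column_rules=rules)
```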
class DeriveLens
A derive lens that creates new component/time columns from an input component.
Derive lenses extract fields from a component and produce new columns, optionally at a different entity and/or with new time columns.
Pass scatter=True to enable 1:N row mapping (exploding lists).
Example usage:

```python
lens = (
    DeriveLens("Imu:accel")
    .to_component(rr.Scalars.descriptor_scalars(), Selector(".x"))
)
```

To write to an explicit target entity:

```python
lens = (
    DeriveLens("Imu:accel", output_entity="/out/x")
    .to_component(rr.Scalars.descriptor_scalars(), Selector(".x"))
)
```
def __init__(input_component, *, output_entity=None, scatter=False)
Create a new derive lens.
| PARAMETER | DESCRIPTION |
|---|---|
| `input_component` | The component identifier to match (e.g. `"Imu:accel"`). |
| `output_entity` | Optional target entity path. When set, output is written to this entity instead of the input entity. |
| `scatter` | When `True`, enables 1:N row mapping (exploding lists). |
def to_component(component, selector)
Add a component output column.
| PARAMETER | DESCRIPTION |
|---|---|
| `component` | A component descriptor for the output column (e.g. `rr.Scalars.descriptor_scalars()`). |
| `selector` | A [`Selector`][rerun.experimental.Selector] extracting the output values from the input component. |

| RETURNS | DESCRIPTION |
|---|---|
| `DeriveLens` | A new [`DeriveLens`][rerun.experimental.DeriveLens] with the component added. |
def to_timeline(timeline_name, timeline_type, selector)
Add a time extraction column.
| PARAMETER | DESCRIPTION |
|---|---|
| `timeline_name` | Name of the timeline to create. |
| `timeline_type` | Type of the timeline. |
| `selector` | A [`Selector`][rerun.experimental.Selector] extracting the time values from the input component. |

| RETURNS | DESCRIPTION |
|---|---|
| `DeriveLens` | A new [`DeriveLens`][rerun.experimental.DeriveLens] with the time column added. |
class IndexedReader
Bases: StreamingReader, Protocol
Protocol for readers backed by an index/manifest.
Extends [`StreamingReader`][rerun.experimental.StreamingReader]: every `IndexedReader` also supports `stream() -> LazyChunkStream` for pure-streaming processing.

Indexed readers expose a [`LazyStore`][rerun.experimental.LazyStore] view over the source via `store()` — the manifest is read up-front; chunks load on demand. To fully materialize into a [`ChunkStore`][rerun.experimental.ChunkStore], call `stream().collect()`.
def store()
Return a LazyStore view of this source.
def stream()
Return a lazy stream over all chunks from this source.
class LazyChunkStream
A lazy, composable pipeline over chunks.
Builder methods (`filter`, `drop`, `split`, `map`, `flat_map`, `lenses`, `merge`) consume the input stream(s) and return new stream(s). A consumed stream cannot be used again; attempting to do so raises a `ValueError`, which prevents accidentally wiring the same stream into a pipeline twice.

Terminal methods (`to_chunks`, `__iter__`, `collect`, `write_rrd`) do not consume the stream — they run the pipeline and leave the stream usable. Each call creates a fresh execution.
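A sketch of these semantics (paths and the entity expression are illustrative):

```python
from rerun.experimental import RrdReader

stream = RrdReader("input.rrd").stream()      # nothing has executed yet
robots = stream.filter(content="/robots/**")  # consumes `stream`
# stream.filter(...) again would now raise ValueError

robots.write_rrd("robots.rrd", application_id="my_app", recording_id="rec-001")
chunks = robots.to_chunks()  # terminals don't consume: this re-runs the pipeline
```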
def __iter__()
Iterate over chunks one at a time (triggers execution).
def collect(*, optimize=None)
Run the pipeline and materialize all chunks into a ChunkStore.
By default, only the single-pass compaction that happens naturally
during chunk insertion is applied. Pass optimize=OptimizationProfile.LIVE
or optimize=OptimizationProfile.OBJECT_STORE to run additional
optimization (extra convergence passes, video GoP rebatching) tuned for
the chosen target.
| PARAMETER | DESCRIPTION |
|---|---|
| `optimize` | If `None` (default), only the single-pass compaction applied during insertion is used. Otherwise, apply the given profile after insertion. |
Examples:

Run with the object-store-tuned profile:

```python
store = reader.stream().collect(optimize=OptimizationProfile.OBJECT_STORE)
```
def drop(*, content=None, has_timeline=None, is_static=None, components=None)
Drop the matching portion of each chunk; keep the rest. Consumes this stream.
Complement of filter(): what filter() would keep is
discarded, what it would discard is kept.
| PARAMETER | DESCRIPTION |
|---|---|
| `content` | Entity path filter. Accepts a single expression or a list of expressions. |
| `has_timeline` | Only drop chunks that have a column for this timeline. |
| `is_static` | If set, only drop static (`True`) or non-static (`False`) chunks. |
| `components` | Drop the listed component columns. Accepts one component or a list of components. |
def filter(*, content=None, has_timeline=None, is_static=None, components=None)
Keep the matching portion of each chunk; drop the rest. Consumes this stream.
All criteria are combined with AND. For chunk-level predicates (content,
has_timeline, is_static) the chunk either passes or is dropped
entirely. For components, the chunk is split by component columns:
only matching component columns are kept (timelines and entity
path are preserved). When a list is given, any column matching
any of the listed components is kept (OR semantics). Chunks that
contain none of the listed components are dropped entirely.
If a chunk fails any predicate, it is dropped entirely -- no component splitting occurs.
| PARAMETER | DESCRIPTION |
|---|---|
| `content` | Entity path filter. Accepts a single expression or a list of expressions. |
| `has_timeline` | Only keep chunks that have a column for this timeline. |
| `is_static` | If set, only keep static (`True`) or non-static (`False`) chunks. |
| `components` | Keep only the listed component columns. Accepts one component or a list of components. |
def flat_map(fn)
Apply a Python function to each chunk, producing zero or more output chunks. Consumes this stream.
Runs in Python (GIL-bound, sequential).
def from_iter(chunks)
staticmethod
Wrap a Python iterable of Chunks into a LazyChunkStream.
Enables user-defined sources and the generator escape hatch.
def lenses(lenses, *, output_mode='drop_unmatched', content=None)
Apply lenses to transform chunk data. Consumes this stream.
Each lens matches chunks by entity path and input component, then transforms the data according to its output specifications.
| PARAMETER | DESCRIPTION |
|---|---|
| `lenses` | One or more [`Lens`][rerun.experimental.Lens] objects to apply. |
| `output_mode` | How to handle unmatched chunks. Defaults to `'drop_unmatched'`. |
| `content` | Optional entity path filter. When set, lenses are applied only to chunks whose entity path matches; non-matching chunks pass through unchanged regardless of `output_mode`. |
def map(fn)
Apply a Python function to each chunk, producing exactly one output chunk. Consumes this stream.
Runs in Python (GIL-bound, sequential). For transforms that may produce
zero or many chunks, use flat_map instead.
def merge(*streams)
staticmethod
Merge chunks from multiple streams into one. Consumes all input streams.
All inputs execute concurrently. Chunks are yielded as they become available. Within each input, chunk order is preserved. Across inputs, ordering is non-deterministic.
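A sketch merging two sources (file names are illustrative):

```python
from rerun.experimental import LazyChunkStream, McapReader, RrdReader

merged = LazyChunkStream.merge(
    RrdReader("a.rrd").stream(),
    McapReader("b.mcap").stream(),
)
# Per-input chunk order is preserved; cross-input interleaving is not.
merged.write_rrd("combined.rrd", application_id="my_app", recording_id="rec-001")
```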
def split(*, content=None, has_timeline=None, is_static=None, components=None)
Split into (matching, non_matching). Consumes this stream.
Equivalent to `(stream.filter(…), stream.drop(…))`, except that both branches share the same upstream, which executes only once. `merge(matching, non_matching)` reconstructs the original stream in a semantically lossless way (component-wise chunk splitting is not undone).

Both branches MUST be consumed for the pipeline to complete (dropping an unconsumed branch is fine and unblocks the other).
| PARAMETER | DESCRIPTION |
|---|---|
| `content` | Entity path filter. Accepts a single expression or a list of expressions. |
| `has_timeline` | Only match chunks that have a column for this timeline. |
| `is_static` | If set, only match static (`True`) or non-static (`False`) chunks. |
| `components` | Match the listed component columns. Accepts one component or a list of components. |
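A sketch routing static and temporal chunks separately, then recombining (paths are illustrative):

```python
from rerun.experimental import LazyChunkStream, RrdReader

static, temporal = RrdReader("input.rrd").stream().split(is_static=True)

# Both branches share one upstream execution and both must be consumed;
# merging them back reconstructs the stream.
LazyChunkStream.merge(static, temporal).write_rrd(
    "out.rrd", application_id="my_app", recording_id="rec-001"
)
```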
def to_chunks()
Run the pipeline and return all chunks as a list.
def write_rrd(path, *, application_id, recording_id)
Run the pipeline and write all chunks to an RRD file.
The caller must provide application_id and recording_id explicitly.
class LazyStore
Index-based, on-demand chunk store.
The manifest is held in memory (so `schema()`, `summary()`, and `__len__` work without loading any chunks), but chunk data is loaded only when requested.

Example: `lazy = RrdReader("recording.rrd").store()`

Use `stream()` to process chunks through the lazy pipeline, or `write_rrd()` to persist to disk. To fully materialize into a [`ChunkStore`][rerun.experimental.ChunkStore], call `lazy.stream().collect()`.
def __len__()
Return the number of chunks described by the manifest.
def schema()
The schema describing all columns in this store, derived from the manifest.
def stream()
Return a lazy stream over all chunks in this store.
def summary()
Compact, deterministic summary of every chunk in the store.
Built from the manifest; no chunk data is loaded. Each line describes one chunk:
{entity_path} rows={n} static={True|False} timelines=[…] cols=[…]
Useful for snapshot testing.
def write_rrd(path, *, application_id, recording_id)
Write all chunks to an RRD file.
The caller must provide application_id and recording_id explicitly.
class McapReader
Read chunks from an MCAP file.
path
property
The file path of the MCAP file.
def __init__(path, *, timeline_type='timestamp', timestamp_offset_ns=None, decoders=None, include_topic_regex=None, exclude_topic_regex=None)
Construct a new MCAP reader.
| PARAMETER | DESCRIPTION |
|---|---|
| `path` | Path to the MCAP file. |
| `timeline_type` | How to interpret MCAP message times. Defaults to `'timestamp'`. |
| `timestamp_offset_ns` | Optional offset in nanoseconds to add to all timestamps. |
| `decoders` | Optional list of MCAP decoder identifiers to enable. If omitted, all available decoders are enabled. Use `available_decoders()` to list the supported identifiers. |
| `include_topic_regex` | Optional list of regex patterns. If provided, only topics matching at least one pattern are decoded. Patterns use RE2 syntax and are not implicitly anchored. |
| `exclude_topic_regex` | Optional list of regex patterns. Topics matching any pattern are skipped. Applied after includes. Same syntax as `include_topic_regex`. |
def available_decoders()
staticmethod
Return the list of all supported MCAP decoder identifiers.
def stream()
Return a lazy stream over all chunks in the MCAP file.
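A sketch of topic-filtered conversion to RRD (path and patterns are illustrative):

```python
from rerun.experimental import McapReader

print(McapReader.available_decoders())    # discover decoder identifiers

reader = McapReader(
    "drive.mcap",
    include_topic_regex=[r"^/camera/"],    # RE2, not implicitly anchored
    exclude_topic_regex=[r"compressed"],   # applied after includes
)
reader.stream().write_rrd("drive.rrd", application_id="my_app", recording_id="rec-001")
```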
class MutateLens
A mutate lens that modifies the input component in-place.
Mutate lenses apply a selector transformation to the input component,
replacing it in the chunk. By default, new row IDs are generated.
Pass keep_row_ids=True to preserve original row IDs.
Example usage:

```python
lens = MutateLens("Imu:accel", Selector(".x"))
```

def __init__(input_component, selector, *, keep_row_ids=False)
Create a new mutate lens that replaces `input_component` with the output of `selector`.
class OptimizationProfile
dataclass
Named optimization profile passed to LazyChunkStream.collect(optimize=...).
Two presets:

- `OptimizationProfile.LIVE`: small chunks tuned for the live Viewer workflow.
- `OptimizationProfile.OBJECT_STORE`: large chunks tuned for object-store-backed query and streaming (e.g. a catalog server).
The presets are fully concrete: every field has a value. Custom profiles
built by calling OptimizationProfile(...) directly may pass None on the
threshold fields to fall back to the SDK's internal default
(OptimizationProfile.LIVE's thresholds).
LIVE
class-attribute
Optimized for the live Viewer workflow: small chunks for low-latency rendering and fine-grained time-panel precision.
OBJECT_STORE
class-attribute
Optimized for object-store-backed storage (e.g. a catalog server): larger chunks tuned for query throughput and streaming over the network.
extra_passes = 50
class-attribute
instance-attribute
Number of extra convergence passes run after the initial insert.
gop_batching = True
class-attribute
instance-attribute
If True (default), video stream chunks are rebatched to align with GoP
(keyframe) boundaries after normal compaction.
GoP rebatching never splits a GoP across chunks, so streams with long
keyframe intervals can produce chunks much larger than max_bytes.
max_bytes = None
class-attribute
instance-attribute
Chunk size threshold in bytes. None means use LIVE's default.
max_rows = None
class-attribute
instance-attribute
Maximum rows per sorted chunk. None means use LIVE's default.
max_rows_if_unsorted = None
class-attribute
instance-attribute
Maximum rows per unsorted chunk. None means use LIVE's default.
split_size_ratio = None
class-attribute
instance-attribute
If set, split chunks so no two archetype groups sharing a chunk differ in
byte size by more than this factor. Values should be >= 1; at 1.0,
every archetype is forced into its own chunk.
This keeps large columns (images, videos, blobs) out of the same chunk as small columns (scalars, transforms, text), so the viewer can fetch just the small columns without dragging along the large payload. Components belonging to the same archetype are always kept together.
A good starting value is 10.0. If None (default), no splitting is
performed.
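A sketch of a custom profile (field values are illustrative; `None` thresholds fall back to `LIVE`'s):

```python
from rerun.experimental import OptimizationProfile, RrdReader

profile = OptimizationProfile(
    extra_passes=10,
    gop_batching=True,
    max_bytes=8 * 1024 * 1024,  # 8 MiB chunk-size threshold
    max_rows=None,              # fall back to LIVE's default
    max_rows_if_unsorted=None,
    split_size_ratio=10.0,      # the suggested starting value
)
store = RrdReader("input.rrd").stream().collect(optimize=profile)
```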
class ParquetReader
Read chunks from a Parquet file.
path
property
The file path of the Parquet file.
def __init__(path, *, entity_path_prefix=None, column_grouping='prefix', delimiter='_', prefixes=None, use_structs=True, static_columns=None, index_columns=None, column_rules=None)
Load a parquet file with configurable column grouping and column rules.
| PARAMETER | DESCRIPTION |
|---|---|
| `path` | Path to the Parquet file. |
| `entity_path_prefix` | Optional prefix for all entity paths. |
| `column_grouping` | How to group columns into chunks. Defaults to `'prefix'`. |
| `delimiter` | Character used to split column names when grouping by prefix. Defaults to `'_'`. |
| `prefixes` | Explicit prefix strings for grouping columns. Required when grouping by explicit prefixes. |
| `use_structs` | When `True` (default), combined columns are emitted as nested structs (struct mode); when `False`, components are emitted flat at the same entity path. |
| `static_columns` | Column names whose values are constant across all rows. These are emitted once as timeless/static data. An error is raised if a listed column contains varying values. |
| `index_columns` | List of columns to use as timeline indices. Each entry is a tuple describing the column and how to interpret it. When omitted, a synthetic index is generated. |
| `column_rules` | Rules for combining columns with matching suffixes into typed Rerun components. Each rule is a [`ColumnRule`][rerun.experimental.ColumnRule]; see the example on that class. |
def stream()
Return a lazy stream over all chunks in the Parquet file.
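A sketch of prefix-grouped loading (file name and column layout are illustrative):

```python
from rerun.experimental import ParquetReader

# Columns like "arm_x"/"arm_y" group under one entity per "arm" prefix.
reader = ParquetReader(
    "robot_log.parquet",
    entity_path_prefix="/robots",
    column_grouping="prefix",
    delimiter="_",
)
for chunk in reader.stream():
    print(chunk.entity_path, chunk.num_rows)
```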
class RrdReader
Read chunks from an RRD file.
Use recordings() or blueprints() to discover what stores exist in the file,
then stream() or store() to access a specific one. When no store is
specified, the first recording store is used.
path
property
The file path of the RRD file.
def blueprints()
List the blueprint entries in this RRD file.
def recordings()
List the recording entries in this RRD file.
def store(*, store=None)
Open a specific store as a LazyStore.
Reads the manifest immediately; chunk data is loaded on demand.
Legacy RRDs without a footer/manifest are not supported here — use
RrdReader(...).stream().collect() for those.
| PARAMETER | DESCRIPTION |
|---|---|
| `store` | Which store to load. If `None`, the first recording store is used. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If the specified store is not in this RRD file, or none is specified and the file contains no recordings. |
def stream(*, store=None)
Return a lazy stream over chunks from a store.
| PARAMETER | DESCRIPTION |
|---|---|
| `store` | Which store to stream. If `None`, the first recording store is used. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If the specified store is not in this RRD file, or none is specified and the file contains no recordings. |
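A sketch of discovering and opening stores (the file path is illustrative):

```python
from rerun.experimental import RrdReader

reader = RrdReader("recording.rrd")
for entry in reader.recordings():
    print(entry.kind, entry.application_id, entry.recording_id)

lazy = reader.store()              # manifest read now; chunks load on demand
print(lazy.summary())
full = reader.stream().collect()   # fully materialize into a ChunkStore
```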
class Selector
A jq-like query selector for Arrow arrays.
Selectors provide a path-based query language (inspired by jq) that operates on Arrow arrays in a columnar fashion.
Syntax overview:
- `.field`: access a named field in a struct
- `[]`: iterate over every element of a list
- `[N]`: index into a list by position
- `?`: error suppression / optional operator
- `!`: assert non-null
- `|`: pipe the output of one expression to another
Example usage:

```python
selector = Selector(".location")
result = selector.execute(my_struct_array)
```

Selectors can also be piped into Python functions:

```python
selector = Selector(".values").pipe(lambda arr: pa.compute.multiply(arr, 2))
result = selector.execute(my_struct_array)
```
def __init__(query)
Parse a selector from a query string.
| PARAMETER | DESCRIPTION |
|---|---|
| `query` | The selector query string (e.g. `".field"`, `".foo \| .bar"`). |
def execute(source)
Execute this selector against a pyarrow array.
| PARAMETER | DESCRIPTION |
|---|---|
| `source` | The input Arrow array to query. |

| RETURNS | DESCRIPTION |
|---|---|
| `Array \| None` | The result array, or `None` if the selector's error was suppressed. |
def execute_per_row(source)
Execute this selector against each row of a pyarrow list array.
The output is guaranteed to have the same number of rows as the input.
| PARAMETER | DESCRIPTION |
|---|---|
| `source` | The input Arrow list array to query. |

| RETURNS | DESCRIPTION |
|---|---|
| `ListArray \| None` | The result list array, or `None` if the selector's error was suppressed. |
def pipe(func)
Pipe the output of this selector through a transformation function or another selector.
Returns a new selector; the original is not modified.
| PARAMETER | DESCRIPTION |
|---|---|
| `func` | A callable that accepts an Arrow array and returns an Arrow array, or another [`Selector`][rerun.experimental.Selector]. |

| RETURNS | DESCRIPTION |
|---|---|
| `Selector` | A new [`Selector`][rerun.experimental.Selector] with the transformation applied. |
class StoreEntry
Describes a store found in an RRD file.
application_id
property
The application ID of the store.
kind
property
Store kind: "recording" or "blueprint".
recording_id
property
The recording ID of the store.
class StreamingReader
Bases: Protocol
Protocol for readers that produce a sequential stream of chunks.
All readers provide `stream() -> LazyChunkStream`. Readers for indexable formats additionally satisfy [`IndexedReader`][rerun.experimental.IndexedReader], which adds `store() -> LazyStore` and `load() -> ChunkStore`.
def stream()
Return a lazy stream over all chunks from this source.
class ViewerClient
A connection to an instance of a Rerun viewer.
Warning
This API is experimental and may change or be removed in future versions.
def __init__(addr='127.0.0.1:9876')
Create a new viewer client connection.
| PARAMETER | DESCRIPTION |
|---|---|
| `addr` | The address of the viewer to connect to, in the format `"host:port"`. Defaults to `"127.0.0.1:9876"` for a local viewer. |
def save_screenshot(file_path, view_id=None)
Save a screenshot to a file.
Warning
This API is experimental and may change or be removed in future versions.
| PARAMETER | DESCRIPTION |
|---|---|
| `file_path` | The path where the screenshot will be saved. **Important:** this path is relative to the viewer's filesystem, not the client's. If your viewer runs on a different machine, the screenshot will be saved there. |
| `view_id` | Optional view ID to screenshot. If `None`, screenshots the entire viewer. |
def send_table(name, table)
Send a table to the viewer.
A table is represented as a dataframe defined by an Arrow record batch.
| PARAMETER | DESCRIPTION |
|---|---|
| `name` | The table name. **Note:** the table name serves as an identifier; if you send a table with the same name twice, the second table replaces the first one. |
| `table` | The Arrow `RecordBatch` containing the table data to send. |
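A sketch (the table name and columns are illustrative):

```python
import pyarrow as pa
from rerun.experimental import ViewerClient

client = ViewerClient()  # connects to "127.0.0.1:9876" by default
table = pa.RecordBatch.from_pydict({"step": [0, 1, 2], "reward": [0.1, 0.5, 0.9]})
client.send_table("training_metrics", table)
# Sending another table under "training_metrics" would replace this one.
```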
def send_chunks(chunks, *, recording=None)
Send chunks to a recording stream. Blocks until every chunk has been queued.
Note

For `LazyChunkStream` and `LazyStore` inputs, this call triggers execution and/or loading, and blocks for the duration.
| PARAMETER | DESCRIPTION |
|---|---|
| `chunks` | One of: an iterable of [`Chunk`][rerun.experimental.Chunk]s, a [`LazyChunkStream`][rerun.experimental.LazyChunkStream], a [`ChunkStore`][rerun.experimental.ChunkStore], or a [`LazyStore`][rerun.experimental.LazyStore]. Source store identity (…) |
| `recording` | Recording stream to send into. Defaults to the current active recording. |
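A sketch, assuming `send_chunks` is importable from `rerun.experimental` (the path and filter are illustrative):

```python
import rerun as rr
from rerun.experimental import RrdReader, send_chunks  # import path assumed

rr.init("chunk_forwarding", spawn=True)  # becomes the active recording

# Sending a LazyChunkStream triggers execution and blocks until every
# chunk has been queued on the recording stream.
send_chunks(RrdReader("input.rrd").stream().filter(content="/robots/**"))
```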