robometric_frame.trajectory_quality.relative_trajectory_error

Relative Trajectory Error (RTE) metric for robotics policy trajectory evaluation.

RTE measures the local accuracy between predicted and reference trajectories by comparing relative motion (displacement vectors) over a specified time window.

Reference:

J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, “A benchmark for the evaluation of RGB-D SLAM systems,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, Oct. 2012.

F. Endres, J. Hess, N. Engelhard, J. Sturm, D. Cremers, and W. Burgard, “An evaluation of the RGB-D SLAM system,” in 2012 IEEE International Conference on Robotics and Automation, IEEE, May 2012.

Classes

RelativeTrajectoryError([delta])

Compute Relative Trajectory Error (RTE) for robotics policy trajectory evaluation.

class robometric_frame.trajectory_quality.relative_trajectory_error.RelativeTrajectoryError(delta=1, **kwargs)[source]

Compute Relative Trajectory Error (RTE) for robotics policy trajectory evaluation.

RTE is calculated as:

RTE = (1/(L-Δ)) * Σ(i=1 to L-Δ) ||(p_{i+Δ} - p_i) - (p*_{i+Δ} - p*_i)||_2

where p_i are predicted trajectory points, p_i* are reference (ground truth) trajectory points, L is the trajectory length, and Δ (delta) is the step size for computing relative motion.

RTE assesses local accuracy by comparing displacement vectors between the predicted and reference trajectories. Unlike ATE (Absolute Trajectory Error), which measures global consistency of absolute positions, RTE focuses on the correctness of relative motion, making it particularly useful for evaluating drift and local tracking performance.

This metric accumulates errors across multiple trajectory pairs and returns the average RTE when compute() is called.
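As a point of reference, the formula above can be sketched in a few lines of plain PyTorch. The `rte` helper below is a standalone illustration of the math, not part of the package API:

```python
import torch

def rte(predicted: torch.Tensor, reference: torch.Tensor, delta: int = 1) -> torch.Tensor:
    # Displacement (relative motion) over a window of `delta` steps.
    pred_disp = predicted[..., delta:, :] - predicted[..., :-delta, :]
    ref_disp = reference[..., delta:, :] - reference[..., :-delta, :]
    # Euclidean norm of each displacement difference, averaged over the
    # L - delta window positions (and any batch dimensions).
    return torch.linalg.norm(pred_disp - ref_disp, dim=-1).mean()
```

For a trajectory whose final point drifts by 0.5 from a three-point reference, this sketch gives an RTE of 0.25: displacement errors of 0.0 and 0.5 averaged over two window positions.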

Parameters:
  • delta (int) – Step size for computing relative motion. Must be >= 1. Larger values assess consistency over longer time windows. Default: 1 (consecutive points).

  • **kwargs (Any) – Additional keyword arguments passed to the base Metric class.
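To make the effect of delta concrete, the sketch below (plain PyTorch, with a hypothetical standalone `rte` helper rather than the class) compares delta=1 and delta=2 on a predicted path that zig-zags around a straight reference line. Per-step displacements expose the zig-zag, while 2-step displacements cancel it:

```python
import torch

def rte(pred: torch.Tensor, ref: torch.Tensor, delta: int) -> torch.Tensor:
    # Average Euclidean error between delta-step displacement vectors.
    pred_disp = pred[delta:] - pred[:-delta]
    ref_disp = ref[delta:] - ref[:-delta]
    return torch.linalg.norm(pred_disp - ref_disp, dim=-1).mean()

reference = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
# Predicted path oscillates by 0.2 around the reference line.
predicted = torch.tensor([[0.0, 0.0], [1.0, 0.2], [2.0, 0.0], [3.0, 0.2]])

print(rte(predicted, reference, delta=1))  # zig-zag penalized (0.2 per step)
print(rte(predicted, reference, delta=2))  # 2-step displacements match exactly (0.0)
```

So a small delta emphasizes instantaneous tracking error, while a larger delta rewards motion that is correct on average over the window.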

Example

>>> from robometric_frame.trajectory_quality import RelativeTrajectoryError
>>> import torch
>>> metric = RelativeTrajectoryError(delta=1)
>>> # Perfect prediction (zero error)
>>> predicted = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> reference = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> metric.update(predicted, reference)
>>> metric.compute()
tensor(0.0000)

Example (with drift):

>>> # Prediction with constant drift in motion
>>> metric = RelativeTrajectoryError(delta=1)
>>> predicted = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5]])
>>> reference = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> metric.update(predicted, reference)
>>> result = metric.compute()

Example (larger delta):

>>> # Using delta=2 to check motion over 2-step windows
>>> metric = RelativeTrajectoryError(delta=2)
>>> predicted = torch.tensor([
...     [0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]
... ])
>>> reference = torch.tensor([
...     [0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]
... ])
>>> metric.update(predicted, reference)
>>> metric.compute()
tensor(0.0000)

Example (batched):

>>> # Batch of trajectory pairs - shape (B, L, D)
>>> metric = RelativeTrajectoryError(delta=1)
>>> predicted_batch = torch.tensor([
...     [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],
...     [[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]]
... ])
>>> reference_batch = torch.tensor([
...     [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],
...     [[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]]
... ])
>>> metric.update(predicted_batch, reference_batch)
>>> metric.compute()
tensor(0.0000)

Example (3D trajectories):

>>> # 3D trajectory comparison
>>> metric = RelativeTrajectoryError(delta=1)
>>> predicted = torch.tensor([
...     [0.0, 0.0, 0.0],
...     [1.0, 0.0, 0.0],
...     [1.0, 1.0, 0.0]
... ])
>>> reference = torch.tensor([
...     [0.0, 0.0, 0.0],
...     [1.0, 0.0, 0.0],
...     [1.0, 1.0, 1.0]
... ])
>>> metric.update(predicted, reference)
>>> result = metric.compute()

Example (distributed):

>>> # In distributed training, metrics are automatically synced
>>> metric = RelativeTrajectoryError(delta=1)
>>> # On GPU 0
>>> pred_gpu0 = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> ref_gpu0 = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> metric.update(pred_gpu0, ref_gpu0)
>>> # On GPU 1
>>> pred_gpu1 = torch.tensor([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
>>> ref_gpu1 = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> metric.update(pred_gpu1, ref_gpu1)
>>> # Final result aggregates across all GPUs
>>> result = metric.compute()
full_state_update: bool = False
total_error: Tensor
num_trajectories: Tensor
__init__(delta=1, **kwargs)[source]

Initialize the RelativeTrajectoryError metric.

Parameters:
  • delta (int) – Step size for computing relative motion. Must be >= 1.

  • **kwargs (Any) – Additional keyword arguments passed to the base Metric class.

Raises:

ValueError – If delta is less than 1.

delta: int
update(predicted, reference)[source]

Update metric state with new predicted and reference trajectory pair(s).

Parameters:
  • predicted (Tensor) –

    Predicted trajectory tensor of shape (…, L, D) where:

      • … represents any number of batch dimensions (can be empty)
      • L is the number of points (must be > delta)
      • D is the spatial dimensionality (e.g., 2 for 2D, 3 for 3D)

    Examples of valid shapes:

      • (L, D): Single trajectory
      • (B, L, D): Batch of B trajectories
      • (B, T, L, D): Batch of B sequences with T slices each

    Points should be ordered chronologically along the L dimension.

  • reference (Tensor) – Reference (ground truth) trajectory tensor with the same shape as predicted.

Raises:

ValueError – If trajectories have invalid shape, mismatched shapes, or insufficient points.

Return type:

None
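The accepted shapes follow directly from how displacements are computed: slicing along the second-to-last (L) axis leaves any leading batch dimensions untouched. A minimal illustration with a hypothetical `displacements` helper (not the package API):

```python
import torch

def displacements(traj: torch.Tensor, delta: int = 1) -> torch.Tensor:
    # Slice along the second-to-last (L) axis; leading batch dimensions
    # pass through unchanged.
    return traj[..., delta:, :] - traj[..., :-delta, :]

single = torch.zeros(5, 3)        # (L, D)
batched = torch.zeros(4, 5, 3)    # (B, L, D)
nested = torch.zeros(2, 4, 5, 3)  # (B, T, L, D)

print(displacements(single).shape)   # torch.Size([4, 3])
print(displacements(batched).shape)  # torch.Size([4, 4, 3])
print(displacements(nested).shape)   # torch.Size([2, 4, 4, 3])
```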

compute()[source]

Compute the average Relative Trajectory Error across all trajectory pairs.

Return type:

Tensor

Returns:

Average RTE as a scalar tensor. Lower values indicate better local tracking performance and less drift.

Raises:

RuntimeError – If no trajectories have been recorded.
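The update/compute lifecycle can be sketched as a plain accumulator. The `RunningRTE` class below is a hypothetical, framework-free illustration mirroring the documented `total_error` / `num_trajectories` state and the documented error conditions; the real class additionally inherits distributed synchronization from its Metric base:

```python
import torch

class RunningRTE:
    # Hypothetical sketch of the accumulate-then-average pattern.
    def __init__(self, delta: int = 1):
        if delta < 1:
            raise ValueError("delta must be >= 1")
        self.delta = delta
        self.total_error = torch.tensor(0.0)
        self.num_trajectories = torch.tensor(0)

    def update(self, predicted: torch.Tensor, reference: torch.Tensor) -> None:
        if predicted.shape != reference.shape:
            raise ValueError("predicted and reference must have the same shape")
        if predicted.shape[-2] <= self.delta:
            raise ValueError("trajectory must have more than delta points")
        d = self.delta
        pred_disp = predicted[..., d:, :] - predicted[..., :-d, :]
        ref_disp = reference[..., d:, :] - reference[..., :-d, :]
        # One RTE value per trajectory: mean norm over window positions.
        per_traj = torch.linalg.norm(pred_disp - ref_disp, dim=-1).mean(dim=-1)
        self.total_error = self.total_error + per_traj.sum()
        self.num_trajectories = self.num_trajectories + per_traj.numel()

    def compute(self) -> torch.Tensor:
        if self.num_trajectories == 0:
            raise RuntimeError("no trajectories recorded")
        return self.total_error / self.num_trajectories
```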

training: bool