robometric_frame.trajectory_quality
Trajectory quality metrics for robotics policy evaluation.
This module provides metrics for evaluating the quality of robot trajectories, including path length, smoothness, curvature change, and trajectory errors.
- class robometric_frame.trajectory_quality.AbsoluteTrajectoryError(**kwargs)[source]
Compute Absolute Trajectory Error (ATE) for robotics policy trajectory evaluation.
- ATE is calculated as:
ATE = (1/L) * Σ(i=1 to L) |p_i - p_i*|_2
where p_i are predicted trajectory points, p_i* are reference (ground truth) trajectory points, and L is the trajectory length. ATE evaluates global consistency by measuring the average Euclidean distance between corresponding points in predicted and reference trajectories.
This metric is critical for navigation and manipulation tasks requiring precise positioning. Lower ATE values indicate better trajectory tracking performance.
This metric accumulates errors across multiple trajectory pairs and returns the average ATE when compute() is called.
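The ATE formula above can be sketched in plain Python for a single unbatched trajectory pair. This is an illustrative reimplementation of the definition only, not the module's batched tensor code:

```python
import math

def ate(predicted, reference):
    """ATE = (1/L) * sum_i ||p_i - p_i*||_2 for one trajectory pair."""
    if len(predicted) != len(reference):
        raise ValueError("predicted and reference must have the same length")
    # Euclidean distance between each pair of corresponding points
    dists = [math.dist(p, q) for p, q in zip(predicted, reference)]
    return sum(dists) / len(dists)

# A constant 1.0 offset along y yields ATE = 1.0
print(ate([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],
          [[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]))  # 1.0
```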
- Parameters:
**kwargs (Any) – Additional keyword arguments passed to the base Metric class.
Example
>>> from robometric_frame.trajectory_quality import AbsoluteTrajectoryError
>>> import torch
>>> metric = AbsoluteTrajectoryError()
>>> # Perfect prediction (zero error)
>>> predicted = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> reference = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> metric.update(predicted, reference)
>>> metric.compute()
tensor(0.0000)
- Example (with error):
>>> # Prediction with constant offset
>>> metric = AbsoluteTrajectoryError()
>>> predicted = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> reference = torch.tensor([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
>>> metric.update(predicted, reference)
>>> metric.compute()
tensor(1.0000)
- Example (batched):
>>> # Batch of trajectory pairs - shape (B, L, D)
>>> metric = AbsoluteTrajectoryError()
>>> predicted_batch = torch.tensor([
...     [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],
...     [[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]]
... ])
>>> reference_batch = torch.tensor([
...     [[0.0, 0.5], [1.0, 0.5], [2.0, 0.5]],
...     [[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]]
... ])
>>> metric.update(predicted_batch, reference_batch)
>>> result = metric.compute()
- Example (3D trajectories):
>>> # 3D trajectory comparison
>>> metric = AbsoluteTrajectoryError()
>>> predicted = torch.tensor([
...     [0.0, 0.0, 0.0],
...     [1.0, 0.0, 0.0],
...     [1.0, 1.0, 0.0]
... ])
>>> reference = torch.tensor([
...     [0.0, 0.0, 0.0],
...     [1.0, 0.0, 0.0],
...     [1.0, 1.0, 1.0]
... ])
>>> metric.update(predicted, reference)
>>> result = metric.compute()
- Example (distributed):
>>> # In distributed training, metrics are automatically synced
>>> metric = AbsoluteTrajectoryError()
>>> # On GPU 0
>>> pred_gpu0 = torch.tensor([[0.0, 0.0], [1.0, 0.0]])
>>> ref_gpu0 = torch.tensor([[0.0, 0.0], [1.0, 0.0]])
>>> metric.update(pred_gpu0, ref_gpu0)
>>> # On GPU 1
>>> pred_gpu1 = torch.tensor([[0.0, 0.0], [1.0, 1.0]])
>>> ref_gpu1 = torch.tensor([[0.0, 0.0], [1.0, 0.0]])
>>> metric.update(pred_gpu1, ref_gpu1)
>>> # Final result aggregates across all GPUs
>>> result = metric.compute()
- update(predicted, reference)[source]
Update metric state with new predicted and reference trajectory pair(s).
- Parameters:
predicted (Tensor) – Predicted trajectory tensor of shape (…, L, D) where:
- … represents any number of batch dimensions (can be empty)
- L is the number of points (must be >= 1)
- D is the spatial dimensionality (e.g., 2 for 2D, 3 for 3D)
Examples of valid shapes:
- (L, D): Single trajectory
- (B, L, D): Batch of B trajectories
- (B, T, L, D): Batch of B sequences with T slices each
Points should be ordered chronologically along the L dimension.
reference (Tensor) – Reference (ground truth) trajectory tensor with the same shape as predicted.
- Raises:
ValueError – If trajectories have invalid shape, mismatched shapes, or insufficient points.
- Return type:
None
- compute()[source]
Compute the average Absolute Trajectory Error across all trajectory pairs.
- Return type:
Tensor
- Returns:
Average ATE as a scalar tensor. Lower values indicate better trajectory tracking performance.
- Raises:
RuntimeError – If no trajectories have been recorded.
- class robometric_frame.trajectory_quality.CurvatureChange(**kwargs)[source]
Compute Curvature Change for robotics policy trajectory evaluation.
- Curvature Change is calculated as:
CC = (1/(L-2)) * Σ(i=1 to L-2) |κ_{i+1} - κ_i|
where κ_i = (θ_{i+1} - θ_i) / |p_{i+1} - p_i|_2
Here, p_i are trajectory positions, θ_i are orientations (heading angles), and κ_i is the curvature at segment i. Unlike path smoothness, this metric incorporates angular velocity and is particularly useful for evaluating car-like mobile robots where curvature relates to turning radius constraints.
This metric accumulates curvature change values across multiple trajectories and returns the average when compute() is called.
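The definition above can be sketched in plain Python for a single unbatched trajectory; an illustrative reimplementation of the formula, not the module's tensor-based code:

```python
import math

def curvature_change(positions, orientations):
    """CC = (1/(L-2)) * sum_i |kappa_{i+1} - kappa_i|, where
    kappa_i = (theta_{i+1} - theta_i) / ||p_{i+1} - p_i||_2."""
    # Curvature of each segment: heading change over segment length
    kappas = [
        (orientations[i + 1] - orientations[i]) / math.dist(positions[i], positions[i + 1])
        for i in range(len(positions) - 1)
    ]
    # Mean absolute change in curvature between consecutive segments
    diffs = [abs(b - a) for a, b in zip(kappas, kappas[1:])]
    return sum(diffs) / len(diffs)

# Straight-line motion with constant heading has zero curvature change
print(curvature_change([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]],
                       [0.0, 0.0, 0.0, 0.0]))  # 0.0
```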
- Parameters:
**kwargs (Any) – Additional keyword arguments passed to the base Metric class.
Example
>>> from robometric_frame.trajectory_quality import CurvatureChange
>>> import torch
>>> metric = CurvatureChange()
>>> # Straight line motion (constant orientation)
>>> positions = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
>>> orientations = torch.tensor([0.0, 0.0, 0.0, 0.0])
>>> metric.update(positions, orientations)
>>> metric.compute()
tensor(0.0000)
- Example (with turn):
>>> # Path with a turn
>>> metric = CurvatureChange()
>>> positions = torch.tensor([
...     [0.0, 0.0],
...     [1.0, 0.0],
...     [2.0, 0.0],
...     [3.0, 1.0]
... ])
>>> # Orientations change from 0 to π/4 radians
>>> orientations = torch.tensor([0.0, 0.0, 0.0, 0.785])
>>> metric.update(positions, orientations)
>>> result = metric.compute()
- Example (batched):
>>> # Batch of trajectories - shape (B, L, D)
>>> metric = CurvatureChange()
>>> positions_batch = torch.tensor([
...     [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]],
...     [[0.0, 0.0], [0.0, 1.0], [0.0, 2.0], [0.0, 3.0]]
... ])
>>> orientations_batch = torch.tensor([
...     [0.0, 0.0, 0.0, 0.0],
...     [1.57, 1.57, 1.57, 1.57]
... ])
>>> metric.update(positions_batch, orientations_batch)
>>> result = metric.compute()
- Example (circular path):
>>> # Circular motion with constant curvature
>>> metric = CurvatureChange()
>>> import math
>>> angles = torch.linspace(0, math.pi/2, 10)
>>> positions = torch.stack([torch.cos(angles), torch.sin(angles)], dim=1)
>>> orientations = angles + math.pi/2  # Tangent direction
>>> metric.update(positions, orientations)
>>> result = metric.compute()  # Should be small for smooth circular motion
- Example (distributed):
>>> # In distributed training, metrics are automatically synced
>>> metric = CurvatureChange()
>>> # On GPU 0
>>> pos_gpu0 = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> ori_gpu0 = torch.tensor([0.0, 0.0, 0.0])
>>> metric.update(pos_gpu0, ori_gpu0)
>>> # On GPU 1
>>> pos_gpu1 = torch.tensor([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]])
>>> ori_gpu1 = torch.tensor([1.57, 1.57, 1.57])
>>> metric.update(pos_gpu1, ori_gpu1)
>>> # Final result aggregates across all GPUs
>>> result = metric.compute()
- update(positions, orientations)[source]
Update metric state with new trajectory or batch of trajectories.
- Parameters:
positions (Tensor) – Position trajectory tensor of shape (…, L, D) where:
- … represents any number of batch dimensions (can be empty)
- L is the number of points (must be >= 3)
- D is the spatial dimensionality (typically 2 for mobile robots)
Examples of valid shapes:
- (L, D): Single trajectory
- (B, L, D): Batch of B trajectories
- (B, T, L, D): Batch of B sequences with T slices each
Points should be ordered chronologically along the L dimension.
orientations (Tensor) – Orientation (heading angle) tensor of shape (…, L) where:
- … represents the same batch dimensions as positions
- L is the number of points (must match positions)
- Values are heading angles in radians
Must have the same batch dimensions and L as positions, but without the spatial dimension D.
- Raises:
ValueError – If trajectories have invalid shape, mismatched shapes, or insufficient points.
- Return type:
None
- compute()[source]
Compute the average Curvature Change across all trajectories.
- Return type:
Tensor
- Returns:
Average curvature change as a scalar tensor. Lower values indicate smoother trajectories with more consistent turning behavior.
- Raises:
RuntimeError – If no trajectories have been recorded.
- class robometric_frame.trajectory_quality.PathLength(**kwargs)[source]
Compute Path Length for robotics policy trajectory evaluation.
- Path Length is calculated as:
PL = Σ(i=1 to L-1) |p_{i+1} - p_i|_2
where p_i are trajectory points in D-dimensional space and L is the length of the trajectory. Shorter paths generally indicate more efficient task execution.
This metric accumulates path lengths across multiple trajectories and returns the average path length when compute() is called.
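The formula can be sketched in plain Python for a single unbatched trajectory; this is an illustrative reference implementation of the definition, not the module's tensor-based code:

```python
import math

def path_length(trajectory):
    """PL = sum_i ||p_{i+1} - p_i||_2: total length of the polyline."""
    return sum(math.dist(a, b) for a, b in zip(trajectory, trajectory[1:]))

# Four unit-length segments give a path length of 4.0
print(path_length([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0],
                   [2.0, 1.0], [2.0, 2.0]]))  # 4.0
```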
- Parameters:
**kwargs (Any) – Additional keyword arguments passed to the base Metric class.
Example
>>> from robometric_frame.trajectory_quality import PathLength
>>> import torch
>>> metric = PathLength()
>>> # 2D trajectory with 5 points
>>> trajectory = torch.tensor([
...     [0.0, 0.0],
...     [1.0, 0.0],
...     [1.0, 1.0],
...     [2.0, 1.0],
...     [2.0, 2.0]
... ])
>>> metric.update(trajectory)
>>> metric.compute()
tensor(4.0000)
- Example (batched):
>>> # Batch of trajectories - shape (B, L, D)
>>> metric = PathLength()
>>> batch = torch.tensor([
...     [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],  # trajectory 1
...     [[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]]   # trajectory 2
... ])
>>> metric.update(batch)
>>> metric.compute()  # Average of 2.0 and 2.0
tensor(2.0000)
- Example (3D trajectories):
>>> # 3D trajectory
>>> metric = PathLength()
>>> trajectory_3d = torch.tensor([
...     [0.0, 0.0, 0.0],
...     [1.0, 0.0, 0.0],
...     [1.0, 1.0, 0.0],
...     [1.0, 1.0, 1.0]
... ])
>>> metric.update(trajectory_3d)
>>> metric.compute()
tensor(3.0000)
- Example (distributed):
>>> # In distributed training, metrics are automatically synced
>>> metric = PathLength()
>>> # On GPU 0
>>> traj_gpu0 = torch.tensor([[0.0, 0.0], [1.0, 0.0]])
>>> metric.update(traj_gpu0)
>>> # On GPU 1
>>> traj_gpu1 = torch.tensor([[0.0, 0.0], [0.0, 1.0]])
>>> metric.update(traj_gpu1)
>>> # Final result aggregates across all GPUs
>>> result = metric.compute()  # Returns aggregated average path length
- update(trajectory)[source]
Update metric state with new trajectory or batch of trajectories.
- Parameters:
trajectory (Tensor) – Tensor of shape (…, L, D) where:
- … represents any number of batch dimensions (can be empty)
- L is the number of points (must be >= 2)
- D is the spatial dimensionality (e.g., 2 for 2D, 3 for 3D)
Examples of valid shapes:
- (L, D): Single trajectory
- (B, L, D): Batch of B trajectories
- (B, T, L, D): Batch of B sequences with T slices each
Points should be ordered chronologically along the L dimension.
- Raises:
ValueError – If trajectory has invalid shape or insufficient points.
- Return type:
None
- compute()[source]
Compute the average Path Length across all trajectories.
- Return type:
Tensor
- Returns:
Average path length as a scalar tensor.
- Raises:
RuntimeError – If no trajectories have been recorded.
- class robometric_frame.trajectory_quality.PathSmoothness(**kwargs)[source]
Compute Path Smoothness for robotics policy trajectory evaluation.
- Path Smoothness is calculated as:
PS = (1/PL) * Σ(i=1 to L-2) |(p_{i+2} - p_{i+1}) - (p_{i+1} - p_i)|_2
where p_i are trajectory points in D-dimensional space, L is the length of the trajectory, and PL is the path length. This metric measures the rate of change in trajectory direction, with lower values indicating smoother paths.
The metric calculates the difference between consecutive displacement vectors, effectively measuring the second derivative (acceleration) of the path. It is normalized by the total path length to make it scale-invariant.
This metric accumulates smoothness values across multiple trajectories and returns the average path smoothness when compute() is called.
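The formula can be sketched in plain Python for a single unbatched trajectory; an illustrative reimplementation of the definition, not the module's tensor-based code:

```python
import math

def path_smoothness(trajectory):
    """PS = (1/PL) * sum_i ||(p_{i+2} - p_{i+1}) - (p_{i+1} - p_i)||_2."""
    # Normalizing path length PL (sum of segment lengths)
    pl = sum(math.dist(a, b) for a, b in zip(trajectory, trajectory[1:]))
    total = 0.0
    for p0, p1, p2 in zip(trajectory, trajectory[1:], trajectory[2:]):
        # Difference between consecutive displacement vectors
        # (a discrete second derivative of the path)
        accel = [(c - b) - (b - a) for a, b, c in zip(p0, p1, p2)]
        total += math.hypot(*accel)
    return total / pl

# A straight line has zero smoothness penalty
print(path_smoothness([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]]))  # 0.0
```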
- Parameters:
**kwargs (Any) – Additional keyword arguments passed to the base Metric class.
Example
>>> from robometric_frame.trajectory_quality import PathSmoothness
>>> import torch
>>> metric = PathSmoothness()
>>> # Smooth straight line (perfect smoothness = 0)
>>> trajectory = torch.tensor([
...     [0.0, 0.0],
...     [1.0, 0.0],
...     [2.0, 0.0],
...     [3.0, 0.0]
... ])
>>> metric.update(trajectory)
>>> metric.compute()
tensor(0.0000)
- Example (with direction change):
>>> # Path with a turn (higher smoothness value)
>>> metric = PathSmoothness()
>>> trajectory = torch.tensor([
...     [0.0, 0.0],
...     [1.0, 0.0],
...     [2.0, 0.0],
...     [2.0, 1.0]
... ])
>>> metric.update(trajectory)
>>> result = metric.compute()
>>> result > 0  # Non-zero smoothness due to direction change
tensor(True)
- Example (batched):
>>> # Batch of trajectories - shape (B, L, D)
>>> metric = PathSmoothness()
>>> batch = torch.tensor([
...     [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]],  # smooth
...     [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [2.0, 1.0]]   # has turn
... ])
>>> metric.update(batch)
>>> result = metric.compute()  # Average smoothness
- Example (3D trajectories):
>>> # 3D trajectory
>>> metric = PathSmoothness()
>>> trajectory_3d = torch.tensor([
...     [0.0, 0.0, 0.0],
...     [1.0, 0.0, 0.0],
...     [2.0, 0.0, 0.0],
...     [3.0, 0.0, 0.0]
... ])
>>> metric.update(trajectory_3d)
>>> metric.compute()  # Perfect smoothness for straight line
tensor(0.0000)
- Example (distributed):
>>> # In distributed training, metrics are automatically synced
>>> metric = PathSmoothness()
>>> # On GPU 0
>>> traj_gpu0 = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> metric.update(traj_gpu0)
>>> # On GPU 1
>>> traj_gpu1 = torch.tensor([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]])
>>> metric.update(traj_gpu1)
>>> # Final result aggregates across all GPUs
>>> result = metric.compute()  # Returns aggregated average smoothness
- update(trajectory)[source]
Update metric state with new trajectory or batch of trajectories.
- Parameters:
trajectory (Tensor) – Tensor of shape (…, L, D) where:
- … represents any number of batch dimensions (can be empty)
- L is the number of points (must be >= 3)
- D is the spatial dimensionality (e.g., 2 for 2D, 3 for 3D)
Examples of valid shapes:
- (L, D): Single trajectory
- (B, L, D): Batch of B trajectories
- (B, T, L, D): Batch of B sequences with T slices each
Points should be ordered chronologically along the L dimension.
- Raises:
ValueError – If trajectory has invalid shape or insufficient points.
- Return type:
None
- compute()[source]
Compute the average Path Smoothness across all trajectories.
- Return type:
Tensor
- Returns:
Average path smoothness as a scalar tensor. Lower values indicate smoother trajectories with fewer direction changes.
- Raises:
RuntimeError – If no trajectories have been recorded.
- class robometric_frame.trajectory_quality.RelativeTrajectoryError(delta=1, **kwargs)[source]
Compute Relative Trajectory Error (RTE) for robotics policy trajectory evaluation.
- RTE is calculated as:
RTE = (1/(L-Δ)) * Σ(i=1 to L-Δ) |(p_{i+Δ} - p_i) - (p_{i+Δ}* - p_i*)|_2
where p_i are predicted trajectory points, p_i* are reference (ground truth) trajectory points, L is the trajectory length, and Δ (delta) is the step size for computing relative motion.
RTE assesses local accuracy by comparing displacement vectors between the predicted and reference trajectories. Unlike ATE which measures global consistency, RTE focuses on the correctness of relative motion, making it particularly useful for evaluating drift and local tracking performance.
This metric accumulates errors across multiple trajectory pairs and returns the average RTE when compute() is called.
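The RTE formula can be sketched in plain Python for a single unbatched trajectory pair; an illustrative reimplementation of the definition, not the module's batched tensor code:

```python
import math

def rte(predicted, reference, delta=1):
    """RTE = (1/(L-delta)) * sum_i ||(p_{i+delta} - p_i) - (p_{i+delta}* - p_i*)||_2."""
    if delta < 1:
        raise ValueError("delta must be >= 1")
    n = len(predicted) - delta
    total = 0.0
    for i in range(n):
        # Displacement over a window of `delta` steps, for each trajectory
        dp = [b - a for a, b in zip(predicted[i], predicted[i + delta])]
        dr = [b - a for a, b in zip(reference[i], reference[i + delta])]
        total += math.hypot(*[x - y for x, y in zip(dp, dr)])
    return total / n

# Identical trajectories have zero relative error
print(rte([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],
          [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]))  # 0.0
```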
- Parameters:
delta (int) – Step size Δ for computing relative motion. Must be >= 1. Defaults to 1.
**kwargs (Any) – Additional keyword arguments passed to the base Metric class.
Example
>>> from robometric_frame.trajectory_quality import RelativeTrajectoryError
>>> import torch
>>> metric = RelativeTrajectoryError(delta=1)
>>> # Perfect prediction (zero error)
>>> predicted = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> reference = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> metric.update(predicted, reference)
>>> metric.compute()
tensor(0.0000)
- Example (with drift):
>>> # Prediction with constant drift in motion
>>> metric = RelativeTrajectoryError(delta=1)
>>> predicted = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5]])
>>> reference = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> metric.update(predicted, reference)
>>> result = metric.compute()
- Example (larger delta):
>>> # Using delta=2 to check motion over 2-step windows
>>> metric = RelativeTrajectoryError(delta=2)
>>> predicted = torch.tensor([
...     [0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]
... ])
>>> reference = torch.tensor([
...     [0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]
... ])
>>> metric.update(predicted, reference)
>>> metric.compute()
tensor(0.0000)
- Example (batched):
>>> # Batch of trajectory pairs - shape (B, L, D)
>>> metric = RelativeTrajectoryError(delta=1)
>>> predicted_batch = torch.tensor([
...     [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],
...     [[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]]
... ])
>>> reference_batch = torch.tensor([
...     [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],
...     [[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]]
... ])
>>> metric.update(predicted_batch, reference_batch)
>>> metric.compute()
tensor(0.0000)
- Example (3D trajectories):
>>> # 3D trajectory comparison
>>> metric = RelativeTrajectoryError(delta=1)
>>> predicted = torch.tensor([
...     [0.0, 0.0, 0.0],
...     [1.0, 0.0, 0.0],
...     [1.0, 1.0, 0.0]
... ])
>>> reference = torch.tensor([
...     [0.0, 0.0, 0.0],
...     [1.0, 0.0, 0.0],
...     [1.0, 1.0, 1.0]
... ])
>>> metric.update(predicted, reference)
>>> result = metric.compute()
- Example (distributed):
>>> # In distributed training, metrics are automatically synced
>>> metric = RelativeTrajectoryError(delta=1)
>>> # On GPU 0
>>> pred_gpu0 = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> ref_gpu0 = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> metric.update(pred_gpu0, ref_gpu0)
>>> # On GPU 1
>>> pred_gpu1 = torch.tensor([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
>>> ref_gpu1 = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
>>> metric.update(pred_gpu1, ref_gpu1)
>>> # Final result aggregates across all GPUs
>>> result = metric.compute()
- __init__(delta=1, **kwargs)[source]
Initialize the RelativeTrajectoryError metric.
- Parameters:
delta (int) – Step size Δ for computing relative motion. Must be >= 1. Defaults to 1.
**kwargs (Any) – Additional keyword arguments passed to the base Metric class.
- Raises:
ValueError – If delta is less than 1.
- update(predicted, reference)[source]
Update metric state with new predicted and reference trajectory pair(s).
- Parameters:
predicted (Tensor) – Predicted trajectory tensor of shape (…, L, D) where:
- … represents any number of batch dimensions (can be empty)
- L is the number of points (must be > delta)
- D is the spatial dimensionality (e.g., 2 for 2D, 3 for 3D)
Examples of valid shapes:
- (L, D): Single trajectory
- (B, L, D): Batch of B trajectories
- (B, T, L, D): Batch of B sequences with T slices each
Points should be ordered chronologically along the L dimension.
reference (Tensor) – Reference (ground truth) trajectory tensor with the same shape as predicted.
- Raises:
ValueError – If trajectories have invalid shape, mismatched shapes, or insufficient points.
- Return type:
None
- compute()[source]
Compute the average Relative Trajectory Error across all trajectory pairs.
- Return type:
Tensor
- Returns:
Average RTE as a scalar tensor. Lower values indicate better local tracking performance and less drift.
- Raises:
RuntimeError – If no trajectories have been recorded.
Modules
- Absolute Trajectory Error (ATE) metric for robotics policy trajectory evaluation.
- Curvature Change metric for robotics policy trajectory evaluation.
- Path Length metric for robotics policy trajectory evaluation.
- Path Smoothness metric for robotics policy trajectory evaluation.
- Relative Trajectory Error (RTE) metric for robotics policy trajectory evaluation.