Using a URDF as a System Specification
If your robot already has a URDF describing its sensor layout, you can use those mechanical
transforms as initial guesses for calibration rather than starting from scratch. MetriCal's
spec from-urdf command extracts the fixed-joint transforms from a URDF and produces a
system specification JSON file; that file then seeds
spatial constraints and camera intrinsics during
plex learn, giving the optimizer a much
better starting point.
Prerequisites
Before starting, familiarize yourself with:
- The system specification format and how MetriCal uses it during plex learning.
- The spec from-urdf command reference for a full list of flags and options.
Step 1: Convert the URDF
Run metrical spec from-urdf to extract a system specification from your URDF. Only joints of type
fixed are extracted; revolute, prismatic, and other movable joints are skipped, since MetriCal
calibrates rigid transforms between sensors.
Basic conversion
metrical spec from-urdf robot.urdf -o system_spec.json
This produces a JSON file containing the mechanical_layout derived from every fixed joint in the
URDF.
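For orientation, here is a hypothetical sketch of roughly what the output might contain. The field values and nested structure are illustrative only; consult the system specification format reference for the authoritative schema.

```json
{
  "mechanical_layout": {
    "left_camera": {
      "translation": [0.10, 0.05, 0.00],
      "rotation": [0.0, 0.0, 0.0, 1.0]
    }
  }
}
```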
Specifying camera properties
URDFs do not distinguish cameras from other links, so you need to tell MetriCal which links are
cameras. The --camera flag accepts the link name, a coordinate basis, and an optional diagonal
field of view in degrees:
metrical spec from-urdf robot.urdf \
--camera left_camera:FLU:90 \
--camera right_camera:RDF \
-o system_spec.json
This populates the camera_bases and camera_field_of_view sections of the output. Providing a
diagonal field of view helps MetriCal seed a more accurate initial focal length; this is especially
useful for narrow or wide-angle lenses.
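To see why the DFoV is useful, here is a sketch of how a diagonal field of view maps to an initial focal length under a simple pinhole model. This is an illustration of the geometry, not MetriCal's actual seeding code; the function name and image dimensions are made up for the example.

```python
import math

def focal_from_dfov(width_px: int, height_px: int, dfov_deg: float) -> float:
    """Estimate a pinhole focal length (in pixels) from a diagonal field of view."""
    diagonal = math.hypot(width_px, height_px)
    # Half the image diagonal subtends half the DFoV at the optical center.
    return (diagonal / 2.0) / math.tan(math.radians(dfov_deg) / 2.0)

# A 90-degree DFoV on a 1920x1080 sensor: tan(45 deg) = 1, so the focal
# length equals half the image diagonal.
f = focal_from_dfov(1920, 1080, 90.0)
```

Note how sensitive the estimate is to the angle: a narrow lens (small DFoV) yields a much longer focal length, which is exactly why a datasheet DFoV beats a generic default.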
ROS URDFs typically define two links per camera:
- camera_link (or similar): the physical mount frame, usually in the FLU (Forward-Left-Up) convention.
- camera_optical_frame: a child link rotated into the RDF (Right-Down-Forward) optical convention.
Point --camera at whichever link you want to use in your system specification and set the basis to
match that frame's convention. If you use the optical frame link, the basis is RDF; if you use the
mount link, it is typically FLU. Getting this wrong will silently produce incorrect extrinsics, so
double-check which frame your URDF link represents.
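To make the stakes concrete: FLU and RDF differ by a fixed 90-degree remapping of axes, so declaring the wrong basis rotates every extrinsic by that amount. The sketch below (an illustrative helper, not MetriCal internals) writes out that remapping.

```python
# Rows express RDF axes in terms of FLU axes:
#   x_rdf (right)   = -y_flu (left)
#   y_rdf (down)    = -z_flu (up)
#   z_rdf (forward) =  x_flu (forward)
R_FLU_TO_RDF = [
    [0.0, -1.0,  0.0],
    [0.0,  0.0, -1.0],
    [1.0,  0.0,  0.0],
]

def flu_to_rdf(p):
    """Re-express a point given in FLU coordinates in the RDF optical convention."""
    return [sum(r * c for r, c in zip(row, p)) for row in R_FLU_TO_RDF]

# A point one meter forward of the camera lands on the RDF +z (forward) axis.
p_rdf = flu_to_rdf([1.0, 0.0, 0.0])  # -> [0.0, 0.0, 1.0]
```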
Topic name remapping
URDF link names rarely match your dataset's topic names directly. Three flags let you bridge the gap:
- --topic-prefix prepends a string to each link name. For example, --topic-prefix / transforms left_camera into /left_camera.
- --strip-prefix removes a prefix from each link name before the topic prefix is applied. For example, --strip-prefix robot_ transforms robot_left_camera into left_camera.
- --remap maps an individual link to an explicit topic name, bypassing both prefix options.
These can be combined freely:
metrical spec from-urdf robot.urdf \
--strip-prefix robot_ \
--topic-prefix /sensors/ \
--remap imu_link:/imu/main \
--camera left_cam:FLU:90 \
-o system_spec.json
In this example, a link named robot_left_cam becomes /sensors/left_cam; the link imu_link is
remapped directly to /imu/main.
Step 2: Review and Edit the Spec
Open the generated JSON and verify that the values look reasonable. The system specification format supports comments, so you can annotate values as you review.
Key fields to check:
- mechanical_layout: Confirm that the translation and rotation values match your expectations from CAD. Watch for sign and axis conventions.
- camera_bases: Verify that each camera's basis matches the coordinate convention of the URDF link you pointed --camera at (see the URDF conventions note above).
- camera_field_of_view: If you provided a DFoV, confirm it matches the datasheet value for each camera.
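Since the format supports comments, a review pass might leave annotations like the following. This fragment is purely illustrative; the topic names are hypothetical and the exact schema is defined in the system specification reference.

```jsonc
{
  "camera_bases": {
    // Mount-frame link, so FLU (the optical frame would be RDF).
    "/sensors/left_cam": "FLU"
  },
  "camera_field_of_view": {
    // Verified against the lens datasheet on 2024-03-01.
    "/sensors/left_cam": 90
  }
}
```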
Step 3: Use the Spec in Plex Learn
Pass the system specification to metrical plex learn alongside your dataset:
metrical plex learn \
-S system_spec.json \
--dataset data.mcap \
-m '*cam*:opencv-radtan' \
-m '*lidar*:lidar' \
-o plex.json
The spec seeds spatial constraints with your mechanical layout values, applies basis corrections for non-RDF cameras, and uses the DFoV entries to compute initial focal length estimates. These values take precedence over purely data-driven estimates, on the assumption that mechanical measurements from CAD are more trustworthy than naive inference.
Step 4: Calibrate
With the learned plex in hand, run calibration as you normally would:
metrical calibrate \
--dataset data.mcap \
--input-plex plex.json \
--input-object-space targets.json \
--results calibrated.json
The mechanical priors from the system specification give the optimizer a strong starting point;
calibration refines those values using the observed data. For more details, see the
calibrate command reference.
Full Manifest Example
In practice, the URDF conversion is a one-time operation; your mechanical layout does not change
between calibration runs. Most users run metrical spec from-urdf once on the command line (as
shown above), check the output into version control, and then reference the resulting file as a
variable in every subsequent manifest run.
That said, you can also include spec-from-urdf as a manifest stage for full reproducibility. The
manifest below shows both approaches; choose whichever fits your workflow.
Option A: Pre-generated spec as a variable
[variables.dataset]
description = "Path to the input MCAP dataset."
value = "data.mcap"
[variables.system-spec]
description = "Path to the system specification (generated from URDF)."
value = "system_spec.json"
[variables.object-space]
description = "Path to the object space JSON file."
value = "targets.json"
[stages.create-plex]
command = "plex-new"
output-plex = "{{auto}}"
[stages.learn]
command = "plex-learn"
input-plex = "{{create-plex.output-plex}}"
dataset = "{{variables.dataset}}"
system-specification = "{{variables.system-spec}}"
topic-to-model = [["*cam*", "opencv-radtan"], ["*lidar*", "lidar"]]
output-plex = "{{auto}}"
[stages.calibrate]
command = "calibrate"
dataset = "{{variables.dataset}}"
input-plex = "{{learn.output-plex}}"
input-object-space = "{{variables.object-space}}"
results = "{{auto}}"
[stages.shape]
command = "shape-mst"
input-plex = "{{calibrate.results}}"
shaped-plex = "{{auto}}"
Option B: URDF conversion as a manifest stage
[variables.dataset]
description = "Path to the input MCAP dataset."
value = "data.mcap"
[variables.urdf]
description = "Path to the robot URDF file."
value = "robot.urdf"
[variables.object-space]
description = "Path to the object space JSON file."
value = "targets.json"
[stages.from-urdf]
command = "spec-from-urdf"
input-urdf = "{{variables.urdf}}"
topic-prefix = "/"
camera = ["left_cam:FLU:90", "right_cam:RDF"]
output = "{{auto}}"
[stages.create-plex]
command = "plex-new"
output-plex = "{{auto}}"
[stages.learn]
command = "plex-learn"
input-plex = "{{create-plex.output-plex}}"
dataset = "{{variables.dataset}}"
system-specification = "{{from-urdf.output}}"
topic-to-model = [["*cam*", "opencv-radtan"], ["*lidar*", "lidar"]]
output-plex = "{{auto}}"
[stages.calibrate]
command = "calibrate"
dataset = "{{variables.dataset}}"
input-plex = "{{learn.output-plex}}"
input-object-space = "{{variables.object-space}}"
results = "{{auto}}"
[stages.shape]
command = "shape-mst"
input-plex = "{{calibrate.results}}"
shaped-plex = "{{auto}}"