
Multi-Camera Calibration

MetriCal's multi-camera calibration is a joint process that calibrates both intrinsics and extrinsics simultaneously. This guide provides specific tips for calibrating multiple cameras together.

MetriCal Workflows

MetriCal offers two approaches to multi-camera calibration: combined (all-at-once) and staged. The best approach depends on your specific hardware setup and calibration requirements.

Combined Calibration Approach

If all your cameras can simultaneously view calibration targets, you can run a direct multi-camera calibration:

# Initialize the calibration
metrical init -m /camera1:eucm -m /camera2:eucm $INIT_PLEX

# Run the calibration
metrical calibrate -o $RESULTS $DATA $INIT_PLEX $OBJ

This approach calibrates both intrinsics and extrinsics in a single step.
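
The uppercase names in these commands are shell variables standing in for your own paths. As a minimal sketch, assuming a single recording named observations.mcap and JSON files in the working directory (all names here are hypothetical), a full combined run might look like:

# Hypothetical paths; substitute your own dataset, plex, and object space files.
DATA=observations.mcap
INIT_PLEX=init_plex.json
OBJ=obj.json
RESULTS=results.json

metrical init -m /camera1:eucm -m /camera2:eucm $INIT_PLEX
metrical calibrate -o $RESULTS $DATA $INIT_PLEX $OBJ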

Staged Calibration Approach for Complex Camera Rigs

For large or complex camera rigs where:

  • Cameras are mounted far apart
  • Some cameras are in hard-to-reach positions
  • It's difficult to have all cameras view targets simultaneously

a staged approach is recommended:

# Step 1: Calibrate individual cameras (intrinsics only)
# Front camera
metrical init -m /front_cam:eucm $FRONT_INIT_PLEX
metrical calibrate -o $FRONT_CAM_RESULTS $FRONT_CAM_DATA $FRONT_INIT_PLEX $OBJ

# Back camera
metrical init -m /back_cam:eucm $BACK_INIT_PLEX
metrical calibrate -o $BACK_CAM_RESULTS $BACK_CAM_DATA $BACK_INIT_PLEX $OBJ

# Step 2: Run an extrinsics-only calibration using the individual results
metrical init -p $FRONT_CAM_RESULTS -p $BACK_CAM_RESULTS -m "*cam*:eucm" $RIG_INIT_PLEX
metrical calibrate -o $RIG_RESULTS $RIG_DATA $RIG_INIT_PLEX $OBJ

This staged approach allows you to:

  1. Capture optimal data for each camera's intrinsics (getting close to each camera to fill its FOV)
  2. Use a separate dataset with targets visible to multiple cameras to determine extrinsics
  3. Avoid the logistical challenges of trying to get optimal data for all cameras simultaneously

The final calibration ($RIG_RESULTS) will contain both intrinsics and extrinsics for all cameras.
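
For rigs with more than two cameras, the same two-step pattern extends with a shell loop. The sketch below is hypothetical: it assumes one recording per camera named <camera>_data, a shared rig_data recording with overlapping views, and the eucm model throughout; adjust the names and models to match your system.

# Hypothetical multi-camera staged workflow; all file and topic names are stand-ins.
CAMS="front_cam back_cam left_cam right_cam"
OBJ=obj.json

# Step 1: per-camera intrinsics, one dataset per camera
for CAM in $CAMS; do
  metrical init -m "/${CAM}:eucm" "${CAM}_init_plex.json"
  metrical calibrate -o "${CAM}_results.json" "${CAM}_data" "${CAM}_init_plex.json" "$OBJ"
done

# Step 2: extrinsics-only run seeded with every per-camera result
PLEX_ARGS=""
for CAM in $CAMS; do
  PLEX_ARGS="$PLEX_ARGS -p ${CAM}_results.json"
done
metrical init $PLEX_ARGS -m "*cam*:eucm" rig_init_plex.json
metrical calibrate -o rig_results.json rig_data rig_init_plex.json "$OBJ"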

For details on calibrating individual cameras, see the Single Camera Calibration guide.

Practical Example

We've captured an example of a good multi-camera calibration dataset that you can use to test out MetriCal. If it's your first time performing a multi-camera calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like.

Running the Example Dataset

First, download the dataset here and unzip it somewhere on your computer. Then, copy the following bash script into the dataset directory:

warning

This script assumes that you have either installed the apt version of MetriCal, or that you are using the docker version with an alias set to metrical. For more information, please review the MetriCal installation instructions.
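
A quick way to confirm that the binary or alias is wired up before running the script (a hedged sketch; the --version flag is an assumption, as most CLI tools support it):

# Verify that `metrical` resolves, whether from the apt install or a docker alias.
# In an interactive bash session, `command -v` will also report aliases.
command -v metrical
metrical --version   # assumed version flag; should print the installed MetriCal version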

#!/bin/bash -i

DATA=observations
INIT_PLEX=init_plex.json
OBJ=obj.json
OUTPUT=results.json
REPORT=results.html

metrical init \
  -y \
  -m color:opencv_radtan \
  -m "ir*:no_distortion" \
  $DATA $INIT_PLEX
metrical calibrate \
  --render \
  --disable-motion-filter \
  --report-path $REPORT \
  -o $OUTPUT $DATA $INIT_PLEX $OBJ

Before running the script, let's take note of a couple of things:

  • The metrical init command uses the -m flag to describe the system being calibrated. It indicates that the color topic should be calibrated with the opencv_radtan model, and that any topics prefixed with ir are already rectified and do not need intrinsics calculated. The configuration generated by metrical init is saved to init_plex.json, which is used during the calibration step to configure the system. You can learn more about plexes here.
  • metrical calibrate is being passed the --disable-motion-filter flag because our example dataset is a sequence of still frames rather than a full capture. If we were to remove this flag, all of our data would be filtered out, because MetriCal would see very large jumps in position between frames. If you are calibrating a full .mcap, we generally recommend removing this flag and letting the motion filter run. You can read more about motion filtering here.

Finally, note the --render flag being passed to metrical calibrate. This flag allows us to watch the detection phase of the calibration in real time. Rendering can have a large impact on performance, but it is invaluable for debugging data quality issues; it has been enabled here so that you can watch the dataset as it's being processed.

warning

MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the --render flag. Please ensure that you've followed the visualization configuration instructions before running this script.
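
As a hedged sketch, installing the Rerun viewer through pip might look like the following. The version pin below is a stand-in, not the real requirement; use the version given in the visualization configuration instructions:

# Install the Rerun viewer; swap the pin for the version MetriCal requires.
pip install rerun-sdk==0.20.0
rerun --version   # confirm the viewer is on your PATH and reports the expected version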

You should now be ready to run the script. When you start it, it will display a visualization window like the following with detections overlaid on the camera frames:

A screenshot of the MetriCal detection visualization

While the calibration is running, take specific note of the target motion patterns, the presence of still periods, and the breadth of camera coverage. When it comes time to design a motion sequence for your own systems, apply what you learn from watching this capture.

When the script finishes, you'll be left with three artifacts:

  • init_plex.json - the initial plex, as described in the prior section.
  • results.html - a human-readable summary of the calibration run. Everything in the report is also logged to your console in real time during the calibration. You can learn more about interpreting the report here.
  • results.json - a file containing the final calibration and various other metrics. You can learn more about results JSON files here and about manipulating your results using shape commands here.
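
If you want a quick look inside these artifacts, standard tools are enough. A hypothetical sketch (the key names inside results.json depend on MetriCal's schema, so jq here only lists whatever top-level sections exist):

jq 'keys' results.json    # list the top-level sections of the calibration results
xdg-open results.html     # open the human-readable report in your browser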

And that's it! Hopefully this trial run has given you a better sense of how to capture data for your own multi-camera calibration.

Data Capture Guidelines

Best Practices

DO | DON'T
✅ Ensure target is visible to multiple cameras simultaneously. | ❌ Calibrate each camera independently without overlap.
✅ Maximize overlap between camera views. | ❌ Have overlap only at the peripheries of wide-angle lenses.
✅ Keep targets in focus in all cameras. | ❌ Capture blurry or out-of-focus images in any camera.
✅ Capture the target across the entire field of view for each camera. | ❌ Only place the target in a small part of each camera's field of view.
✅ Rotate the target 90° for some captures. | ❌ Keep the target in only one orientation.
✅ Capture the target from various angles to maximize convergence. | ❌ Only capture the target from similar angles.
✅ Pause between poses to avoid motion blur. | ❌ Move the target continuously during capture.

Maximize Overlap Between Images

Filling each camera's full field of view is important for determining its distortion, but multi-camera calibration also requires that cameras jointly observe the same object space; this shared view is what determines the relative extrinsics between them.

Once you've covered the entire field of view of each camera individually, focus on capturing the object space in multiple cameras from the same position.

The location of this overlap also matters. For example, when working with very wide field-of-view lenses, having overlap only at the peripheries can produce odd results, because the overlap is largely contained in high-distortion areas of the image. Aim for overlap in varying regions of the cameras' fields of view.

Basic Camera Calibration Principles

All of the principles that apply to single camera calibration also apply to each camera in a multi-camera setup:

Keep Targets in Focus

Ensure all cameras in your system are focused properly. A lens focused at infinity is recommended for calibration. Knowing the depth of field for each camera helps ensure you never get blurry images in your data.

Consider Target Orientations

Collect data where the target is captured at both 0° and 90° orientations to de-correlate errors in x and y measurements. This applies to all cameras in your multi-camera setup.

Fill the Full Field of View

For each camera in your setup, ensure you capture the target across the entire field of view, especially near the edges where distortion is greatest.

Maximize Convergence Angles

The convergence angle of each camera's pose relative to the object space is important. Aim for convergence angles of 70° or greater when possible.

Advanced Considerations

Using Multiple Object Spaces

When working with multiple cameras, using multiple object spaces (calibration targets) or a non-planar target can be particularly beneficial. This provides:

  1. Better depth variation, which helps reduce projective compensation
  2. More opportunities for overlap between cameras with different fields of view or orientations
  3. Improved extrinsics estimation between cameras

For calibrating cameras with minimal overlap in their fields of view, using multiple targets at different positions can help create indirect connections between cameras that don't directly observe the same space.

Troubleshooting

If you encounter errors during calibration, please refer to our Errors documentation.

Common issues with multi-camera calibration include:

  • No features detected (cal-calibrate-001)
  • Initial camera pose estimates failed to converge (cal-calibrate-010)
  • Compatible component type has no detections (cal-calibrate-008)

Remember that for successful multi-camera calibration, it's essential that the cameras have overlapping views of the calibration target during at least some portion of the data capture process.