
Multi-Camera Calibration

MetriCal's multi-camera calibration is a joint process: intrinsics and extrinsics are estimated simultaneously for all cameras. This guide provides specific tips for calibrating multiple cameras together.

Multi-Camera Calibration Tutorial

Calibration Guidelines

Data Capture Best Practices

DO | DON'T
✅ Ensure target is visible to multiple cameras simultaneously. | ❌ Calibrate each camera independently without overlap.
✅ Maximize overlap between camera views. | ❌ Have overlap only at the peripheries of wide-angle lenses.
✅ Keep targets in focus in all cameras. | ❌ Capture blurry or out-of-focus images in any camera.
✅ Capture the target across the entire field of view for each camera. | ❌ Only place the target in a small part of each camera's field of view.
✅ Rotate the target 90° for some captures. | ❌ Keep the target in only one orientation.
✅ Capture the target from various angles to maximize convergence. | ❌ Only capture the target from similar angles.
✅ Pause between poses to avoid motion blur. | ❌ Move the target continuously during capture.

Maximize Overlap Between Images

While it's important to fill the full field of view of each individual camera to determine distortions, for multi-camera calibration, cameras must jointly observe the same object space to determine the relative extrinsics between them.

Once you've covered the entire field of view of each camera individually, focus on positioning the target so that multiple cameras capture it at the same time.

The location of this overlap is also important. For example, with very wide field-of-view lenses, having overlap only at the peripheries can degrade the extrinsics solution, because the overlap falls entirely in the most heavily distorted regions of the image. Aim for overlap in varying regions of the cameras' fields of view.

Basic Camera Calibration Principles

All of the principles that apply to single camera calibration also apply to each camera in a multi-camera setup:

Keep Targets in Focus

Ensure all cameras in your system are focused properly. A lens focused at infinity is recommended for calibration. Knowing the depth of field for each camera helps ensure you never get blurry images in your data.
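As a rough guide, the hyperfocal distance H ≈ f² / (N · c), where f is the focal length, N the f-number, and c the circle of confusion, indicates how much working distance you have. For example, an 8 mm lens at f/4 with c = 0.005 mm gives H ≈ 3.2 m, so a lens focused at infinity keeps everything beyond roughly 3.2 m acceptably sharp; plan your target distances accordingly.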

Consider Target Orientations

Collect data where the target is captured at both 0° and 90° orientations to de-correlate errors in x and y measurements. This applies to all cameras in your multi-camera setup.

Fill the Full Field of View

For each camera in your setup, ensure you capture the target across the entire field of view, especially near the edges where distortion is greatest.

Maximize Convergence Angles

The convergence angle of each camera's pose relative to the object space is important. Aim for convergence angles of 70° or greater when possible; for example, capturing the target from roughly 35° off-axis on either side gives about 70° between the extreme views.

Running a Multi-Camera Calibration

Direct Multi-Camera Calibration

If all your cameras can simultaneously view calibration targets, you can run a direct multi-camera calibration:

# Initialize the calibration
metrical init -m /camera1:eucm -m /camera2:eucm $INIT_PLEX

# Run the calibration
metrical calibrate -o $RESULTS $DATA $INIT_PLEX $OBJ

This approach calibrates both intrinsics and extrinsics in a single step.
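
For reference, here is a minimal sketch of how the placeholder variables in the commands above might be defined. The file names and formats are illustrative assumptions, not fixed conventions; substitute the topics and paths from your own system.

# Illustrative values only -- adjust topics, models, and paths for your rig
INIT_PLEX=init_plex.json        # plex written by `metrical init` (assumed file name)
OBJ=object_space.json           # object space (target) description (assumed file name)
DATA=multi_cam_capture.mcap     # recording containing both camera topics (assumed file name)
RESULTS=results.json            # path where calibration results will be written

Set these before running the init and calibrate commands above.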

Staged Calibration for Complex Camera Rigs

For large or complex camera rigs where:

  • Cameras are mounted far apart
  • Some cameras are in hard-to-reach positions
  • It's difficult to have all cameras view targets simultaneously

a staged approach is recommended:

# Step 1: Calibrate individual cameras (intrinsics only)
# Front camera
metrical init -m /front_cam:eucm $FRONT_INIT_PLEX
metrical calibrate -o $FRONT_CAM_RESULTS $FRONT_CAM_DATA $FRONT_INIT_PLEX $OBJ

# Back camera
metrical init -m /back_cam:eucm $BACK_INIT_PLEX
metrical calibrate -o $BACK_CAM_RESULTS $BACK_CAM_DATA $BACK_INIT_PLEX $OBJ

# Step 2: Run an extrinsics-only calibration using the individual results
# (quote the wildcard so the shell doesn't expand it before MetriCal sees it)
metrical init -p $FRONT_CAM_RESULTS -p $BACK_CAM_RESULTS -m "*cam*:eucm" $RIG_INIT_PLEX
metrical calibrate -o $RIG_RESULTS $RIG_DATA $RIG_INIT_PLEX $OBJ

This staged approach allows you to:

  1. Capture optimal data for each camera's intrinsics (bringing the target close enough to each camera to fill its field of view)
  2. Use a separate dataset with targets visible to multiple cameras to determine extrinsics
  3. Avoid the logistical challenges of trying to get optimal data for all cameras simultaneously

The final calibration ($RIG_RESULTS) will contain both intrinsics and extrinsics for all cameras.
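
As a rough sketch, note that the staged workflow above relies on three separate captures. The file names below are assumptions purely for illustration; any naming scheme works as long as each dataset matches its step.

# Assumed file layout for the staged workflow (names are illustrative)
FRONT_CAM_DATA=front_cam_intrinsics.mcap   # close-range capture that fills the front camera's field of view
BACK_CAM_DATA=back_cam_intrinsics.mcap     # close-range capture that fills the back camera's field of view
RIG_DATA=rig_extrinsics.mcap               # capture with targets visible to both cameras at once
OBJ=object_space.json                      # shared object space description
# The *_INIT_PLEX and *_RESULTS variables follow the same pattern, e.g.:
FRONT_INIT_PLEX=front_init_plex.json
FRONT_CAM_RESULTS=front_cam_results.json

The key point is that the intrinsics captures and the extrinsics capture are independent recordings, so each can be optimized for its own purpose.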

For details on calibrating individual cameras, see the Single Camera Calibration guide.

Advanced Considerations

Using Multiple Object Spaces

When working with multiple cameras, using multiple object spaces (calibration targets) or a non-planar target can be particularly beneficial. This provides:

  1. Better depth variation, which helps reduce projective compensation
  2. More opportunities for overlap between cameras with different fields of view or orientations
  3. Improved extrinsics estimation between cameras

For calibrating cameras with minimal overlap in their fields of view, using multiple targets at different positions can help create indirect connections between cameras that don't directly observe the same space.
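
As a hedged example, consider a three-camera rig where the left and right cameras never share a view, but each overlaps with a front camera: all three cameras still go into a single plex, and the front camera's observations, combined with multiple targets, tie the rig together. The topic names and camera model below are assumptions.

# Assumed topics: left/front and front/right overlap, left/right do not
metrical init -m /left_cam:eucm -m /front_cam:eucm -m /right_cam:eucm $RIG_INIT_PLEX
metrical calibrate -o $RIG_RESULTS $RIG_DATA $RIG_INIT_PLEX $OBJ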

Troubleshooting

If you encounter errors during calibration, please refer to our Errors documentation.

Common issues with multi-camera calibration include:

  • No features detected (cal-calibrate-001)
  • Initial camera pose estimates failed to converge (cal-calibrate-010)
  • Compatible component type has no detections (cal-calibrate-008)

Remember that for successful multi-camera calibration, it's essential that the cameras have overlapping views of the calibration target during at least some portion of the data capture process.