Camera ↔ LiDAR Calibration
All of MetriCal's LiDAR calibrations require a circle target in the object space.
Camera ↔ LiDAR calibration is currently extrinsics-only with respect to the LiDAR: MetriCal solves for the extrinsics between the LiDAR and camera components but does not estimate LiDAR intrinsics. It is still a joint calibration, however, because camera intrinsics are estimated alongside the extrinsics.
Data Capture Best Practices
| DO | DON'T |
|---|---|
| ✅ Vary your distance from the sensors to the target a few times to capture a wider range of readings. | ❌ Stand at a single distance from the sensors. |
| ✅ Make sure some of your captures fill much of the camera's field of view. | ❌ Only capture poses where the target covers little of the cameras' fields of view. |
| ✅ Rotate the target on its planar axis periodically to capture poses of the board at different orientations. | ❌ Keep the board in a single orientation for every pose. |
| ✅ Maximize convergence angles by occasionally angling the target up/down and left/right during poses. | ❌ Keep the target in a single orientation, thereby minimizing convergence angles. |
| ✅ Pause between poses to eliminate motion blur effects. | ❌ Move the target constantly or rapidly during data capture. |
| ✅ Bring the target close to each camera lens and capture target poses across each camera's entire field of view. | ❌ Stay distant from the cameras or only capture poses that cover part of each camera's field of view. |
| ✅ Achieve convergence angles of 35° or more on each axis. | ❌ Capture only head-on poses with shallow convergence angles. |
Using The Circle Target
MetriCal uses a special target called the LiDAR Circle target, or simply "the circle". The circle consists of two parts:
- A markerboard cut into a circle
- Retroreflective tape around the edge of the circle
Its design allows MetriCal to bridge the modalities of camera (a projection of Euclidean space onto a plane) and LiDAR (a representation of Euclidean space).
Since we're using a markerboard to associate the camera with a position in space, all the same data collection considerations for cameras apply:
- Rotate the circle
- Capture several angles
- Capture the circle at different distances
Following camera data collection best practices will also guarantee good data for LiDAR calibration.
MetriCal doesn't currently calibrate LiDAR intrinsics. If you're interested in calibrating the intrinsics of your LiDAR for better accuracy and precision, get in touch!
Key Considerations for Quality Calibration
Maximize Variation Across All 6 Degrees Of Freedom
When capturing the circular target with your camera and LiDAR components, ensure that the target can be seen from a variety of poses by varying:
- Roll, pitch, and yaw rotations of the target relative to the sensors
- X, Y, and Z translations between poses
MetriCal performs LiDAR-based calibrations best when the target can be seen from a range of different depths. Try moving toward and away from the target, in addition to moving laterally relative to the sensors.
Pause Between Different Poses
MetriCal performs more consistently if there is less motion blur or motion-based artifacts in the captured data. For the best calibrations, pause for 1-2 seconds after every pose of the board. Constant motion in a dataset will typically yield poor results.
Beware of LiDAR Noise
Some LiDAR sensors can have significant noise when detecting retroreflective surfaces. This can cause a warping effect in the point cloud data, where points are spread out in a way that makes it difficult to detect the true surface of the circle.
For Ouster LiDARs, this is caused by the retroreflective material saturating the photodiodes and affecting the time-of-flight estimation. To prevent this warping effect, you can lower the signal strength of the emitted beam by sending a POST request that sets the `signal_multiplier` to 0.25, as shown in the Ouster documentation.
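As a rough sketch, that request could look like the following with curl. The endpoint path and payload format are assumptions based on Ouster's HTTP configuration API, so verify them against the Ouster documentation for your sensor's firmware version:

```bash
# Sketch only: verify the endpoint and payload against the Ouster docs
# for your firmware version. Replace <sensor-hostname> with your
# sensor's address.
curl -X POST http://<sensor-hostname>/api/v1/sensor/config \
  -H "Content-Type: application/json" \
  -d '{"signal_multiplier": 0.25}'
```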
Running a Camera-LiDAR Calibration
MetriCal offers two approaches to Camera-LiDAR calibration: combined (all-at-once) and staged (pipeline). The best approach depends on your specific hardware setup and calibration requirements.
Combined Calibration Approach
The combined approach calibrates camera intrinsics and Camera-LiDAR extrinsics simultaneously, which typically yields the most accurate results:
```bash
# Initialize the calibration
metrical init -m camera*:eucm -m lidar*:lidar $INIT_PLEX

# Run the calibration
metrical calibrate --render -o $OUTPUT $DATA $INIT_PLEX $OBJ
```
Parameters:
- `camera*:eucm`: Assigns the extended unified camera model to all topics matching "camera*"
- `lidar*:lidar`: Assigns the LiDAR model to all topics matching "lidar*"
- `--render`: Enables visualization
- `-o`: Specifies the output file location
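The `-m` flag also accepts explicit topic names, which is handy when your topics don't share a common prefix. A hypothetical example (substitute the topic names from your own dataset):

```bash
# Hypothetical topic names; substitute the topics in your own dataset.
metrical init \
  -m /camera_front:eucm \
  -m /camera_rear:eucm \
  -m /ouster/points:lidar \
  $INIT_PLEX
```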
This approach works best when:
- Your LiDAR and camera(s) can simultaneously view the calibration target
- The circle target can be positioned to fill the camera's field of view
- You have good control over the target positioning
Staged Calibration Approach
For complex rigs where optimal camera calibration and optimal LiDAR-camera calibration require different setups, a staged approach is recommended:
```bash
# Step 1: Calibrate camera intrinsics first
CAMERA_DATA=/path/to/camera_only.mcap   # Dataset focused on optimal camera intrinsics
CAMERA_RESULTS=/path/to/camera_cal.json

metrical init -m /camera:eucm $CAMERA_INIT_PLEX
metrical calibrate -o $CAMERA_RESULTS $CAMERA_DATA $CAMERA_INIT_PLEX $OBJ

# Step 2: Use camera intrinsics to seed a Camera-LiDAR calibration
LIDAR_CAM_DATA=/path/to/lidar_camera.mcap   # Dataset focused on Camera-LiDAR extrinsics
FINAL_RESULTS=/path/to/final_cal.json

metrical init -p $CAMERA_RESULTS -m /camera:eucm -m /lidar:lidar $LIDAR_CAM_INIT_PLEX
metrical calibrate -o $FINAL_RESULTS $LIDAR_CAM_DATA $LIDAR_CAM_INIT_PLEX $OBJ
```
This staged approach allows you to:
- Capture optimal data for camera intrinsics (getting close to ensure full FOV coverage)
- Use a separate dataset focused on good LiDAR-camera observations for extrinsics
- Lock in the camera intrinsics during the second calibration stage
The staged approach is particularly useful when:
- Your camera is mounted in a hard-to-reach position
- You need different viewing distances for optimal camera calibration vs. LiDAR-camera calibration
- You're calibrating a complex rig with multiple sensors
For details on camera-only calibration, see the Single Camera Calibration guide and the Multi-Camera Calibration guide.
Analyzing Calibration Results
Residual Metrics
For camera-LiDAR calibration, MetriCal outputs two key metrics:
- Circle misalignment RMSE: Indicates how well the center of the markerboard (detected in camera space) aligns with the geometric center of the retroreflective circle (in LiDAR space). Values around 3 cm or less typically indicate a good calibration.
- Interior point-to-plane RMSE (if `detect_interior_points` was enabled): Measures how well the interior points of the circle align with the plane of the circle. Values around 1 cm or less are considered good.
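If you want to look these values up after the run, the results file written via `-o` is plain JSON, so you can inspect it directly. Where (and whether) summary metrics appear in that file depends on your MetriCal version, so pretty-print it first to find the right paths:

```bash
# Sketch: pretty-print the results JSON and page through it to locate
# the residual metrics; the exact layout depends on your MetriCal version.
jq '.' $FINAL_RESULTS | less
```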
Visualizing Results
During calibration, two timelines are created in Rerun:

- Detections: Shows the detections of all targets in both camera and LiDAR space. Only observations with detections are rendered.
- Corrections: Visualizes the derived calibration as applied to the dataset and shows the object space with all its spatial constraints.

You can focus on LiDAR detections by double-clicking on `detections` in the Rerun interface. To see the aligned camera-LiDAR view, double-click on a camera in the `corrections` space.
If the calibration quality doesn't meet your requirements, consider recapturing data at a slower pace or modifying the target with more/less retroreflectivity.
Troubleshooting
If you encounter errors during calibration, please refer to our Errors documentation.
Common issues with Camera ↔ LiDAR calibration include:
- No features detected (cal-calibrate-001)
- Too many component observations filtered (cal-calibrate-006)
- No compatible component type to register against (cal-calibrate-007)
Remember that all measurements for your targets should be in meters, and you should ensure visibility of as much of the target as possible when collecting data.