Camera ↔ LiDAR Calibration
All of MetriCal's LiDAR calibrations require a circle target to be used in the object space.
MetriCal Workflows
MetriCal offers two approaches to Camera-LiDAR calibration: combined (all-at-once) and staged. The best approach depends on your specific hardware setup and calibration requirements.
Combined Calibration Approach
The combined approach calibrates camera intrinsics and Camera-LiDAR extrinsics simultaneously, which typically yields the most accurate results:
```bash
# Initialize the calibration
metrical init -m camera*:eucm -m lidar*:lidar $INIT_PLEX

# Run the calibration
metrical calibrate --render -o $OUTPUT $DATA $INIT_PLEX $OBJ
```
Parameters:
- `camera*:eucm`: Assigns the extended unified camera model to all topics named `camera*`
- `lidar*:lidar`: Assigns the LiDAR model to all topics named `lidar*`
- `--render`: Enables visualization
- `-o`: Specifies the output file location
This approach works best when:
- Your LiDAR and camera(s) can simultaneously view the calibration target
- The circle target can be positioned to fill the camera's field of view
- You have good control over the target positioning
Staged Calibration Approach for Complex Camera Rigs
For complex rigs where optimal camera calibration and optimal LiDAR-camera calibration require different setups, a staged approach is recommended:
```bash
# Step 1: Calibrate camera intrinsics first
CAMERA_DATA=/path/to/camera_only.mcap # Dataset focused on optimal camera intrinsics
CAMERA_RESULTS=/path/to/camera_cal.json

metrical init -m /camera:eucm $CAMERA_INIT_PLEX
metrical calibrate -o $CAMERA_RESULTS $CAMERA_DATA $CAMERA_INIT_PLEX $OBJ

# Step 2: Use camera intrinsics to seed a Camera-LiDAR calibration
LIDAR_CAM_DATA=/path/to/lidar_camera.mcap # Dataset focused on Camera-LiDAR extrinsics
FINAL_RESULTS=/path/to/final_cal.json

metrical init -p $CAMERA_RESULTS -m /camera:eucm -m /lidar:lidar $LIDAR_CAM_INIT_PLEX
metrical calibrate -o $FINAL_RESULTS $LIDAR_CAM_DATA $LIDAR_CAM_INIT_PLEX $OBJ
```
This staged approach allows you to:
- Capture optimal data for camera intrinsics (getting close to ensure full FOV coverage)
- Use a separate dataset focused on good LiDAR-camera observations for extrinsics
- Lock in the camera intrinsics during the second calibration stage
The staged approach is particularly useful when:
- Your camera is mounted in a hard-to-reach position
- You need different viewing distances for optimal camera calibration vs. LiDAR-camera calibration
- You're calibrating a complex rig with multiple sensors
For details on camera-only calibration, see the Single Camera Calibration guide and the Multi-Camera Calibration guide.
Practical Example
We've captured an example of a good LiDAR ↔ Camera calibration dataset that you can use to test out MetriCal. If it's your first time performing a LiDAR calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like.
Running the Example Dataset
First, download the dataset here and unzip it somewhere on your computer. Then, copy the following bash script into the dataset directory:
This script assumes that you have either installed the apt version of MetriCal, or that you are using the Docker version with an alias set to `metrical`. For more information, please review the MetriCal installation instructions.
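If you're going the Docker route, the alias might look something like the sketch below. The image name and volume mounts here are assumptions for illustration; the installation instructions are the authoritative reference.

```bash
# Hypothetical alias for the Docker distribution of MetriCal.
# The image name and mount points are assumptions; see the MetriCal
# installation instructions for the exact invocation.
alias metrical='docker run --rm -it \
    --volume "$(pwd)":/data \
    --workdir /data \
    tangramvision/cli'
```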
```bash
#!/bin/bash -i
DATA=./camera_lidar_capture.mcap
INIT_PLEX=init_plex.json
OBJ=obj_circle.json
OUTPUT=results.json
REPORT=results.html

metrical init \
  -y \
  -m */image_rect_raw:no_distortion \
  -m /velodyne*:lidar \
  $DATA $INIT_PLEX

metrical calibrate \
  --render \
  --disable-motion-filter \
  --report-path $REPORT \
  -o $OUTPUT $DATA $INIT_PLEX $OBJ
```
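Before digging into what the script does, note that it runs like any other shell script. Assuming you saved it as `run_calibration.sh` (a filename we've chosen here for illustration):

```bash
# Make the script executable, then run it from the dataset directory.
chmod +x run_calibration.sh
./run_calibration.sh
```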
The `metrical init` command uses the `-m` flag to describe the system being calibrated. It indicates that any topics ending in `/image_rect_raw` are from cameras with already-rectified images (`no_distortion`), and that the lidar topic name starts with `/velodyne`. We can use the `*` glob character to capture multiple topics, or just to prevent extra typing when telling MetriCal which topics to use. The configuration generated by `metrical init` is saved to `init_plex.json`, which will be used during the calibration step to configure the system. You can learn more about plexes here.
Finally, note the `--render` flag being passed to `metrical calibrate`. This flag allows us to watch the detection phase of the calibration as it happens in real time. Rendering can have a large impact on performance, but it is invaluable for debugging data quality issues. It has been enabled here so that you can watch the dataset as it's being processed.
MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the `--render` flag. Please ensure that you've followed the visualization configuration instructions before running this script.
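As a rough sketch, the Rerun viewer is typically installed through pip; the exact version to pin comes from the visualization configuration instructions, so don't treat the unpinned install below as definitive:

```bash
# Install the Rerun viewer via pip. Pin the version to whatever the
# MetriCal visualization configuration instructions specify for your
# MetriCal release.
pip install rerun-sdk
```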
You should now be ready to run the script. When you start it, it will display a visualization window showing the LiDAR point cloud, with detections overlaid on the camera frames. Note that the LiDAR circle detections will show up as red points in the point cloud.
While the calibration is running, take specific note of the target motion patterns, presence of still periods, and breadth of camera coverage. When it comes time to design a motion sequence for your own systems, try to apply any learnings you take from watching this capture.
When the script finishes, you'll be left with three artifacts:
- `init_plex.json` - as described in the prior section.
- `results.html` - a human-readable summary of the calibration run. Everything in the report is also logged to your console in realtime during the calibration. You can learn more about interpreting the report here.
- `results.json` - a file containing the final calibration and various other metrics. You can learn more about results JSON files here, and about manipulating your results using `shape` commands here.
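If you want a quick look at the output without opening the report, `results.json` is ordinary JSON, so standard tooling works. For example, assuming you have `jq` installed:

```bash
# List the top-level keys of the results file, then pretty-print it.
jq 'keys' results.json
jq . results.json | less
```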
Residual Metrics
For camera-LiDAR calibration, MetriCal outputs two key metrics:
- Circle misalignment RMSE: Indicates how well the center of the markerboard (detected in camera space) aligns with the geometric center of the retroreflective circle (in LiDAR space). Typically, values around 3cm or less indicate a good calibration.
- Interior point-to-plane RMSE (if `detect_interior_points` was enabled): Measures how well the interior points of the circle align with the plane of the circle. Values around 1cm or less are considered good.
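Both metrics are root-mean-square errors over per-observation residuals. As a sketch of the general form (our notation, not MetriCal's):

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \lVert r_i \rVert^{2}}
```

For circle misalignment, $r_i$ would be the offset between the circle center detected in LiDAR space and the board center detected in camera space; for interior point-to-plane, $r_i$ would be the distance from each interior LiDAR point to the fitted circle plane.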
Visualizing Results
You can focus on LiDAR detections by double-clicking on `detections` in the Rerun interface. To see the aligned camera-LiDAR view, double-click on a camera in the `corrections` space.
If the calibration quality doesn't meet your requirements, consider recapturing data at a slower pace or modifying the target with more/less retroreflectivity.
Data Capture Guidelines
LiDAR ↔ Camera calibration is currently extrinsics-only with respect to the LiDAR: MetriCal does not solve for LiDAR intrinsics. It is still a joint calibration, however, as camera intrinsics are estimated alongside the LiDAR-camera extrinsics.
Best Practices
| DO | DON'T |
|---|---|
| ✅ Vary distance from sensors to target a few times to capture a wider range of readings. | ❌ Stand at a single distance from the sensors. |
| ✅ Make sure some of your captures fill much of the camera field of view. | ❌ Only capture poses where the target does not fill much of the cameras' fields of view. |
| ✅ Rotate the target on its planar axis periodically to capture poses of the board at different orientations. | ❌ Keep the board in a single orientation for every pose. |
| ✅ Maximize convergence angles by occasionally angling the target up/down and left/right during poses. | ❌ Keep the target in a single orientation, thereby minimizing convergence angles. |
| ✅ Pause in between poses to eliminate motion blur effects. | ❌ Move the target constantly or rapidly during data capture. |
| ✅ Bring the target close to each camera lens and capture target poses across each camera's entire field of view. | ❌ Stay distant from the cameras, or only capture poses that cover part of each camera's field of view. |
| ✅ Achieve convergence angles of 35° or more on each axis. | |
Using The Circle Target
MetriCal uses a special target called a Lidar Circle target, or simply "the circle". The circle consists of two parts:
- A markerboard cut into a circle
- Retroreflective tape around the edge of the circle
Its design allows MetriCal to bridge the modalities of camera (a projection of Euclidean space onto a plane) and LiDAR (a representation of Euclidean space).
Since we're using a markerboard to associate the camera with a position in space, all the same data collection considerations for cameras apply:
- Rotate the circle
- Capture several angles
- Capture the circle at different distances
Following camera data collection best practices will also yield good data for LiDAR calibration.
MetriCal doesn't currently calibrate LiDAR intrinsics. If you're interested in calibrating the intrinsics of your LiDAR for better accuracy and precision, get in touch!
Maximize Variation Across All 6 Degrees Of Freedom
When capturing the circular target with your camera and LiDAR components, ensure that the target can be seen from a variety of poses by varying:
- Roll, pitch, and yaw rotations of the target relative to the sensors
- X, Y, and Z translations between poses
MetriCal performs LiDAR-based calibrations best when the target can be seen from a range of different depths. Try moving forward and away from the target in addition to moving laterally relative to the sensors.
Pause Between Different Poses
MetriCal performs more consistently if there is less motion blur or motion-based artifacts in the captured data. For the best calibrations, pause for 1-2 seconds after every pose of the board. Constant motion in a dataset will typically yield poor results.
Advanced Considerations
Beware of LiDAR Noise
Some LiDAR sensors can have significant noise when detecting retroreflective surfaces. This can cause a warping effect in the point cloud data, where points are spread out in a way that makes it difficult to detect the true surface of the circle.
For Ouster LiDARs, this is caused by the retroreflective material saturating the photodiodes and affecting the time-of-flight estimation. To prevent this warping effect, you can lower the signal strength of the emitted beam by sending a POST request that sets the `signal_multiplier` to 0.25, as shown in the Ouster documentation.
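As a hedged sketch, such a request might look like the following. The config endpoint path and hostname below are assumptions based on Ouster's HTTP API conventions; confirm the exact endpoint for your firmware version in the Ouster documentation.

```bash
# Hedged sketch: lower the Ouster sensor's emitted signal strength.
# The endpoint path is an assumption; verify it against the Ouster
# documentation for your firmware version before use.
SENSOR_HOSTNAME=os-992xxxxxxxxx.local   # hypothetical sensor hostname
curl -X POST "http://$SENSOR_HOSTNAME/api/v1/sensor/config" \
  -H "Content-Type: application/json" \
  -d '{"signal_multiplier": 0.25}'
```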
Troubleshooting
If you encounter errors during calibration, please refer to our Errors documentation.
Common issues with Camera ↔ LiDAR calibration include:
- No features detected (cal-calibrate-001)
- Too many component observations filtered (cal-calibrate-006)
- No compatible component type to register against (cal-calibrate-007)
Remember that all measurements for your targets should be in meters, and you should ensure visibility of as much of the target as possible when collecting data.