MetriCal Reports
When running a calibration with MetriCal, successful runs generate a comprehensive set of charts and diagnostics to help you understand your calibration quality. This documentation explains each chart section, what metrics they display, and how to interpret them for your calibration workflow.
Color Coding in MetriCal Output
MetriCal uses ANSI terminal codes for colorizing output according to an internal assessment of metric quality:
- █ Cyan : Spectacular (excellent calibration quality)
- █ Green: Good (solid calibration quality)
- █ Orange: Okay, but generally poor (may need improvement)
- █ Red: Bad (likely needs attention)
Note that this quality assessment is based on experience with a variety of datasets and may not accurately reflect your specific calibration needs. Use these colors as a general guide, but always consider the context of your calibration setup and goals.
Chart Sections Overview
MetriCal organizes outputs into six main sections:
- Data Inputs (DI-* prefix) - Information about your input data
- Camera Modeling (CM-* prefix) - Charts showing how well the camera models fit the data
- Extrinsics Info (EI-* prefix) - Metrics on the spatial relationships between components
- Calibrated Plex (CP-* prefix) - Results of your calibration
- Summary Stats (SS-* prefix) - Overall performance metrics
- Data Diagnostics - Warnings and advice about your calibration dataset
Let's explore each chart in detail.
Data Inputs Charts (DI)
These charts provide information about the data you provided to MetriCal.
DI-1: Calibration Inputs
This table displays the basic configuration settings used for your calibration run:
- MetriCal Version: The version of MetriCal used for the calibration
- Optimization Profile: The optimization strategy used (Standard, Performance, or Minimize-Error)
- Motion Thresholds: Camera and LiDAR motion thresholds used (or "Disabled" if motion filter was turned off)
- Preserve Input Constraints: Whether input constraints were preserved
- Object Relative Extrinsics Inference: Whether ORE inference was enabled
DI-2: Object Space Descriptions
This table describes the calibration targets (object spaces) used in your dataset:
- Type: The type of target object (e.g., DotMarkers, Charuco, etc.)
- UUID: The unique identifier of the object
- Detector: The detector used (e.g., "Dictionary: Aruco7x7_50") and its description
DI-3: Processed Observation Count
This critical table shows how many observations were processed from your dataset:
- Component: The sensor component name and UUID
- # read: Total number of observations read from the dataset
- # with detections: Number of observations where the detector identified features
- # after quality filter: Number of detections that passed the quality filter
- # after motion filter: Number of detections that passed the motion filter
If there's a significant drop from one of these columns to the next, it may indicate an issue with your dataset or settings. Significant drops are flagged in the Data Diagnostics section.
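As a quick self-check, the sketch below walks counts like DI-3's column by column and flags any stage that discards a large share of its input. The dictionary layout, stage names, and warning threshold are hypothetical, not part of MetriCal's output format.

```python
# Walk the DI-3 counts for each component and flag any stage that discards a large
# share of its input. The dict layout, stage names, and threshold are hypothetical,
# not MetriCal output.
di3_counts = {
    "cam_front": {"read": 1200, "detections": 1150, "quality": 1100, "motion": 400},
}

STAGES = ["read", "detections", "quality", "motion"]
DROP_WARNING_RATIO = 0.5  # flag stages that keep less than half of their input

for component, counts in di3_counts.items():
    for prev_stage, next_stage in zip(STAGES, STAGES[1:]):
        before, after = counts[prev_stage], counts[next_stage]
        if before and after / before < DROP_WARNING_RATIO:
            print(f"{component}: {prev_stage} -> {next_stage} kept only "
                  f"{after}/{before} observations ({after / before:.0%})")
```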
DI-4: Camera FOV Coverage
This visual chart shows how well your calibration data covers the field of view (FOV) of each camera:
- The chart divides each camera's image into a 10x10 grid
- Each grid cell shows how many features were detected in that region
- Color coding indicates feature density:
- █ Red: No features detected
- █ Orange: 1-15 features detected
- █ Green: 16-50 features detected
- █ Cyan: >50 features detected
Ideally, you want to see a mostly green/cyan grid with minimal red cells, indicating good coverage across the entire FOV. Poor coverage can lead to inaccurate intrinsics calibration.
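The sketch below shows, in rough terms, the kind of binning such a chart is built from: detected feature locations are dropped into a 10x10 grid and counted per cell. The image size and feature coordinates are synthetic, and the thresholds simply mirror the color bands listed above.

```python
import numpy as np

# A rough sketch of the binning behind a chart like DI-4. The image size, feature
# locations, and thresholds here are synthetic / illustrative assumptions.
width, height = 1920, 1080
features = np.random.default_rng(0).uniform([0, 0], [width, height], size=(500, 2))

grid = np.zeros((10, 10), dtype=int)
cols = np.clip((features[:, 0] / width * 10).astype(int), 0, 9)
rows = np.clip((features[:, 1] / height * 10).astype(int), 0, 9)
np.add.at(grid, (rows, cols), 1)  # count features per cell

# With a 10x10 grid, cell counts double as percentages of the FOV.
covered = np.count_nonzero(grid)
well_covered = np.count_nonzero(grid > 15)  # green/cyan bands from the legend above
print(f"{covered}% of cells have features; {well_covered}% are well covered")
```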
DI-5: Detection Timeline
This chart visualizes when detections occurred across your dataset timeline:
- X-axis represents time in seconds since the first observation
- Each row represents a different sensor component
- Points indicate timestamps when features were detected
- Components are color-coded for easy differentiation
This helps you visualize how synchronized your sensor data is and identify any gaps in observations. If you expect all of your observations to align nicely, but they aren't aligned at all, it's a sign that your timestamps are not being written or read correctly.
Whether the problem is synchronization or missing data, this chart is a good place to start debugging.
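If you need to dig into a timing problem, a quick script like the following can summarize detection timestamps per component and check how much the components actually overlap in time. The component names and timestamps here are made up for illustration.

```python
from itertools import combinations

import numpy as np

# A small sanity check on detection timestamps, assuming you have a list of
# detection times (in seconds) per component. Names and values are made up.
detections = {
    "cam_front": np.array([0.0, 0.1, 0.2, 0.3, 5.0, 5.1]),
    "lidar_top": np.array([0.05, 0.15, 0.25, 5.05]),
}

for name, times in detections.items():
    gaps = np.diff(np.sort(times))
    print(f"{name}: span {times.min():.2f}-{times.max():.2f} s, "
          f"largest gap {gaps.max():.2f} s")

# Little or no overlap between two components usually means timestamps are not
# being written or read correctly.
for a, b in combinations(detections, 2):
    overlap = min(detections[a].max(), detections[b].max()) - max(
        detections[a].min(), detections[b].min())
    print(f"{a} <-> {b}: temporal overlap {overlap:.2f} s")
```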
Camera Modeling Charts (CM)
These charts show how well the selected camera models fit your calibration data.
CM-1: Binned Reprojection Errors
This heatmap visualizes reprojection errors across the camera's field of view:
- The chart shows a 10x10 grid representing the camera FOV
- Each cell contains the weighted RMSE (Root Mean Square Error) of reprojection errors in that region
- Color coding indicates error magnitude:
- █ Cyan: < 0.1px error
- █ Green: 0.1-0.25px error
- █ Orange: 0.25-1.0px error
- █ Red: > 1px error or no data
Ideally, most cells should be cyan or green. Areas with consistently higher errors (orange/red) may indicate issues with your camera model or lens distortion that isn't being captured correctly.
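For intuition, the following sketch computes a binned reprojection RMSE of the same flavor, assuming you have per-feature pixel locations and residuals for one camera. It uses equal weights for every residual, which is a simplification of the weighted RMSE MetriCal reports.

```python
import numpy as np

# Illustrative binned reprojection RMSE, assuming per-feature pixel locations and
# (dx, dy) residuals for one camera. Equal weights are used here; MetriCal's
# reported values are weighted, so numbers will not match exactly.
width, height = 1920, 1080
rng = np.random.default_rng(1)
pixels = rng.uniform([0, 0], [width, height], size=(2000, 2))
residuals = rng.normal(0.0, 0.2, size=(2000, 2))

cols = np.clip((pixels[:, 0] / width * 10).astype(int), 0, 9)
rows = np.clip((pixels[:, 1] / height * 10).astype(int), 0, 9)
sq_err = np.sum(residuals**2, axis=1)  # squared residual magnitude per feature

rmse_grid = np.full((10, 10), np.nan)  # NaN marks cells with no data
for r in range(10):
    for c in range(10):
        mask = (rows == r) & (cols == c)
        if mask.any():
            rmse_grid[r, c] = np.sqrt(sq_err[mask].mean())

print(np.round(rmse_grid, 3))
```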
CM-2: Stereo Pair Rectification Error
For multi-camera setups, this chart shows the stereo rectification error between camera pairs. Any two cameras that saw the same targets at the same time are treated as a stereo pair.
- Lists each camera pair combination
- Shows various error metrics for stereo rectification
- Indicates how well the extrinsic calibration aligns the two cameras
Lower values indicate better stereo calibration.
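One common way to quantify rectification error is the vertical (row) disparity between matched features after rectification, since a perfectly rectified pair puts corresponding features on the same image row. The sketch below computes that proxy from synthetic matches; it is not necessarily the exact metric MetriCal reports.

```python
import numpy as np

# Proxy for rectification error: RMS vertical disparity between matched features
# in a rectified stereo pair. Matches are synthetic; this may not be the exact
# metric MetriCal reports.
rng = np.random.default_rng(2)
left_rows = rng.uniform(0, 1080, size=500)
right_rows = left_rows + rng.normal(0.0, 0.3, size=500)  # ideally identical rows

y_disparity = left_rows - right_rows
print(f"rectification RMSE: {np.sqrt(np.mean(y_disparity**2)):.3f} px")
```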
Extrinsics Info Charts (EI)
These charts provide metrics on the spatial relationships between your calibrated components.
EI-1: Component Extrinsics Errors
This is a complete summary of all component extrinsics errors (as RMSE) between each pair of components, as described by the Composed Relative Extrinsics metrics. This table is probably one of the most useful when evaluating the quality of a plex's extrinsics calibration. Note that the extrinsics errors are weighted, which means outliers are taken into account. Lower values indicate more precise extrinsic calibration between components.
Rotations are printed as Euler angles, using Extrinsic XYZ convention.
- X, Y, Z (m): Translation errors in meters
- Roll, Pitch, Yaw (°): Rotation errors in degrees
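If you want to reproduce these angles from a rotation matrix or quaternion, note that SciPy's lowercase "xyz" axis string denotes extrinsic rotations, matching the convention above. A minimal sketch, with an arbitrary rotation:

```python
from scipy.spatial.transform import Rotation

# Lowercase "xyz" in SciPy means extrinsic rotations, i.e. the extrinsic XYZ
# (roll, pitch, yaw) convention used in these tables. The matrix below is arbitrary.
rotation_matrix = Rotation.from_euler("xyz", [10.0, -5.0, 30.0], degrees=True).as_matrix()
roll, pitch, yaw = Rotation.from_matrix(rotation_matrix).as_euler("xyz", degrees=True)
print(f"roll={roll:.1f}°, pitch={pitch:.1f}°, yaw={yaw:.1f}°")
```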
EI-2: IMU Preintegration Errors
This is a complete summary of all IMU Preintegration errors from the system. Notice that IMU preintegration error is with respect to an object space, not a component. The inertial frame of a system is tied to a landmark in space, so it makes sense that an IMU's error would be tied to a target.
Rotations are printed as Euler angles, using Extrinsic XYZ convention.
EI-3: Observed Camera Range of Motion
This critical table shows how much motion was observed for each camera during data collection:
- Z (m): Range of depth (distance variation) observed
- Horizontal angle (°): Range of horizontal motion observed
- Vertical angle (°): Range of vertical motion observed
For accurate calibration, you typically want to see:
- Z range > 1m
- Horizontal angle > 60°
- Vertical angle > 60°
Insufficient range of motion is a common reason for calibration issues, often leading to projective compensation errors.
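As a rough illustration of how such ranges can be estimated, the sketch below assumes you have the target's origin expressed in each camera frame (x right, y down, z forward) for every observation, and measures the spread in depth and viewing angle. The exact definitions MetriCal uses may differ.

```python
import numpy as np

# Rough estimate of a camera's observed range of motion, assuming the target
# origin is expressed in the camera frame (x right, y down, z forward) for each
# observation. MetriCal's exact definitions may differ; these poses are synthetic.
rng = np.random.default_rng(3)
target_in_cam = rng.uniform([-1.0, -0.5, 0.5], [1.0, 0.5, 3.0], size=(200, 3))

x, y, z = target_in_cam.T
horizontal = np.degrees(np.arctan2(x, z))
vertical = np.degrees(np.arctan2(y, z))

print(f"Z range: {z.max() - z.min():.2f} m")
print(f"horizontal angle range: {horizontal.max() - horizontal.min():.1f}°")
print(f"vertical angle range: {vertical.max() - vertical.min():.1f}°")
```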
Calibrated Plex Charts (CP)
These tables display the actual calibration results for your sensor system.
CP-1: Camera Metrics
This table shows the calibrated intrinsic parameters for each camera. Different models will have different interpretations; see the Camera Models page for more.
- Specs: Basic camera specifications (width, height, pixel pitch)
- Projection Model: Calibrated projection parameters (focal length, principal point)
- Distortion Model: Calibrated distortion parameters (varies by model type)
The standard deviations (±) indicate the uncertainty of each parameter.
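To make the parameters concrete, here is a minimal sketch of applying calibrated intrinsics to project a 3D point, assuming a pinhole projection model with Brown-Conrady (k1, k2, p1, p2) distortion. Your camera may use a different model, and the parameter values below are made up.

```python
import numpy as np

# Minimal pinhole + Brown-Conrady projection using calibrated intrinsics like those
# reported in CP-1. The model and parameter values are illustrative assumptions;
# your camera's projection/distortion model may differ.
fx, fy, cx, cy = 900.0, 900.0, 960.0, 540.0     # focal lengths and principal point (px)
k1, k2, p1, p2 = -0.12, 0.05, 0.001, -0.0005    # radial and tangential distortion

def project(point_in_camera: np.ndarray) -> np.ndarray:
    """Project a 3D point in the camera frame to pixel coordinates."""
    x, y = point_in_camera[0] / point_in_camera[2], point_in_camera[1] / point_in_camera[2]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.array([fx * x_d + cx, fy * y_d + cy])

print(project(np.array([0.2, -0.1, 2.0])))
```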
CP-2: Optimized IMU Metrics
This table presents all IMU metrics derived for every IMU component in a calibration run. The most interesting column for most users is Intrinsics, which holds the scale, shear, rotation, and g-sensitivity terms.
CP-3: Calibrated Extrinsics
This table represents the Minimum Spanning Tree of all spatial constraints in the Plex. Note that this table doesn't print every spatial constraint in the plex; it includes only the "best" constraints that still preserve the overall structure.
Rotations are printed as Euler angles, using Extrinsic XYZ convention.
- Translation (m): The X, Y, Z position of each component relative to the origin
- Diff from input (mm): How much the calibration changed from the initial values
- Rotation (°): The Roll, Pitch, Yaw rotation of each component in degrees
- Diff from input (°): How much the rotation changed from the initial values
The table also indicates which "subplex" each component belongs to (components that share spatial relationships).
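Conceptually, the selection resembles taking a minimum spanning tree over a graph whose edges are spatial constraints weighted by their uncertainty. The sketch below illustrates that idea with networkx; the component names, weights, and selection criterion are illustrative rather than MetriCal's actual algorithm.

```python
import networkx as nx

# Toy minimum spanning tree over spatial constraints, with each edge weighted by
# some uncertainty measure (lower is better). Names, weights, and the selection
# criterion are illustrative, not MetriCal's actual algorithm.
constraints = nx.Graph()
constraints.add_edge("cam_front", "cam_rear", weight=0.8)
constraints.add_edge("cam_front", "lidar_top", weight=0.2)
constraints.add_edge("cam_rear", "lidar_top", weight=0.3)
constraints.add_edge("cam_rear", "imu_0", weight=0.1)

mst = nx.minimum_spanning_tree(constraints)
print(sorted(mst.edges(data="weight")))
```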
Summary Statistics Charts (SS)
These charts provide overall metrics on calibration quality.
SS-1: Optimization Summary Statistics
This table provides high-level metrics about the optimization process:
- Optimized Object RMSE: The overall reprojection error across all cameras
- Posterior Variance: A statistical measure of the calibration uncertainty
Lower values indicate a more accurate calibration.
SS-2: Camera Summary Statistics
This table summarizes reprojection errors for each camera. Typically, values under 0.5px indicate good calibration, with values under 0.2px being excellent. However, this can vary based on your camera image resolution or camera type.
If two cameras have pixels of different sizes, it is important to first convert these RMSEs to a common metric size so they can be compared fairly. This is what pixel_pitch in the Plex API is for: with pixel pitch taken into account, cameras can be compared on an equal footing even when their pixel sizes differ.
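A small sketch of that conversion, assuming pixel_pitch gives the physical size of one pixel in micrometers (the values below are illustrative):

```python
# Convert per-camera RMSE from pixels to a metric size, assuming pixel_pitch is
# the physical width of one pixel in micrometers. All values are illustrative.
cameras = {
    "cam_wide":   {"rmse_px": 0.30, "pixel_pitch_um": 2.0},
    "cam_narrow": {"rmse_px": 0.30, "pixel_pitch_um": 4.5},
}

for name, cam in cameras.items():
    rmse_um = cam["rmse_px"] * cam["pixel_pitch_um"]
    print(f"{name}: {cam['rmse_px']:.2f} px -> {rmse_um:.2f} µm on the sensor")
```

Here, two cameras with identical pixel RMSE end up with quite different metric errors once pixel size is accounted for.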
SS-3: LiDAR Summary Statistics
The LiDAR Summary Statistics show the Root Mean Square Error (RMSE) of four different types of residual metrics:
- Circle Misalignment, if a camera-LiDAR pair is co-visible with a LiDAR circle target
- Interior Points to Plane Error, if a camera-LiDAR pair is co-visible with a LiDAR circle target
- Paired 3D Point Error, if a LiDAR-LiDAR pair is co-visible with a LiDAR circle target
- Paired Plane Normal Error, if co-visible LiDAR are present
For a component that has been appropriately modeled (i.e. there are no un-modeled systematic error sources present), this value represents the typical magnitude of error in observations taken by that single component.
Two LiDAR calibrated simultaneously will have the same RMSE relative to one another. This makes intuitive sense: LiDAR A will have a certain relative error to LiDAR B, but LiDAR B will have that same relative error when compared to LiDAR A. Make sure to take this into account when comparing LiDAR RMSE more generally.
Data Diagnostics
MetriCal performs comprehensive analysis of your calibration data and provides diagnostics to help identify potential issues. These diagnostics are categorized by severity level:
█ High-Risk Diagnostics
Critical issues that likely need to be addressed for reliable calibration:
- Poor Camera Range of Motion: Camera movement range insufficient for calibration
- No Component to Register Against: Missing required component for calibration
- Component Missing Compatible Component Type: Compatible component exists but has no detections
- Component Missing Compatible Component Type Detections: All detections filtered from compatible component
- Motion Filter filtered all component observations: Motion filter removed all observations
- Not Enough Mutual Observations: Camera pair lacks sufficient mutual observations
- Component Shares No Sync Groups: Component has no timestamp overlap with others
- Object Has Large Variances: Object has excessive variance (> 1e-6)
█ Medium-Risk Diagnostics
Issues that should be addressed but may not prevent successful calibration:
- Poor Camera Feature Coverage: Poor feature coverage in camera FOV (< 75%)
- Two or More Spatial Subplexes: Multiple unrelated spatial groups detected
- More Mutual Observations?: Camera pair could benefit from more mutual observations
█ Low-Risk Diagnostics
Advice that might help improve calibration quality:
- Component Has Many Low Quality Detections: High proportion of detections discarded due to quality issues
Output Summary
This is a summary of the output files generated by MetriCal, or instructions on how to access them.
Conclusion
Interpreting MetriCal output charts is key to understanding your calibration quality and identifying areas for improvement. By systematically analyzing each section, you can iteratively improve your calibration process to achieve more accurate results.
Remember that calibration is both an art and a science—experimental design matters greatly, and the metrics provided by MetriCal help you quantify the quality of your calibration data and results.