# Tangram Vision Documentation > Use MetriCal for accurate, precise, and expedient calibration of multimodal sensor suites ## metrical ### calibration_guides - [IMU Data Capture](/metrical/calibration_guides/camera_imu_cal.md): This file explains IMU calibration with MetriCal, which requires no specific targets since IMUs measure forces and velocities directly, and must be performed alongside camera calibration. - [Camera ↔ LiDAR Calibration](/metrical/calibration_guides/camera_lidar_cal.md): This file provides a comprehensive guide for calibrating camera-LiDAR sensor pairs using MetriCal, including both combined and staged calibration approaches with circular targets. - [Calibration Guide Overview](/metrical/calibration_guides/guide_overview.md): This file provides general calibration principles and guidelines that apply to all MetriCal calibration scenarios, including target selection, data quality considerations, and sensor-specific guidance. - [LiDAR ↔ LiDAR Calibration](/metrical/calibration_guides/lidar_lidar_cal.md): This file provides a guide for calibrating multiple LiDAR sensors together using MetriCal with circle targets and includes a practical example dataset. - [Multi-Camera Calibration](/metrical/calibration_guides/multi_camera_cal.md): This file provides guidance for multi-camera calibration in MetriCal, covering both combined and staged approaches for calibrating camera intrinsics and extrinsics simultaneously. - [Single Camera Calibration](/metrical/calibration_guides/single_camera_cal.md): This file provides tips and best practices for single camera calibration using MetriCal, including basic workflows and techniques for seeding larger multi-stage calibrations. ### calibration_models - [Camera Models](/metrical/calibration_models/cameras.md): This file documents all supported camera intrinsics models in MetriCal, including pinhole, distortion models, and fisheye lenses with their respective parameters and mathematical descriptions. - [IMU Models](/metrical/calibration_models/imu.md): This file documents all supported IMU intrinsics models in MetriCal for calibrating accelerometer and gyroscope measurements with their mathematical modeling and correction processes. - [LiDAR Models](/metrical/calibration_models/lidar.md): This file documents the supported LiDAR models in MetriCal, which currently focuses on extrinsics calibration since LiDAR intrinsics are typically reliable from the factory. ### changelog This file contains the comprehensive release notes and changelogs for MetriCal versions, documenting new features, bug fixes, breaking changes, and version-specific improvements. - [Releases + Changelogs](/metrical/changelog.md): This file contains the comprehensive release notes and changelogs for MetriCal versions, documenting new features, bug fixes, breaking changes, and version-specific improvements. ### commands - [Calibrate Mode](/metrical/commands/calibrate.md): This file documents the MetriCal calibrate command, which performs full bundle adjustment calibration of multi-modal sensor systems using pre-recorded data, plex configuration, and object space definitions. - [Errors & Troubleshooting](/metrical/commands/command_errors.md): This file catalogs MetriCal's error codes and exit codes with descriptions and troubleshooting steps for various operational failures and system issues. 
- [MetriCal Commands](/metrical/commands/commands_overview.md): This file provides an overview of all MetriCal commands including init, calibrate, report, display, shape, and their universal options and usage patterns. - [Completion Mode](/metrical/commands/completion.md): This file describes how to generate shell auto-completion scripts for MetriCal across different shell environments including bash, fish, elvish, powershell, and zsh. - [Consolidate Object Spaces Mode](/metrical/commands/consolidate_object_spaces.md): This file documents the MetriCal consolidate object spaces command, which combines multiple object spaces into a single unified configuration using object relative extrinsics. - [Display Mode](/metrical/commands/display.md): This file documents the MetriCal display command, which visualizes calibration results applied to datasets using Rerun for ocular validation of calibration quality. - [Init Mode](/metrical/commands/init.md): This file documents the MetriCal init command, which creates uncalibrated plex files from datasets by inferring components, spatial constraints, and temporal constraints. - [Pipeline Mode](/metrical/commands/pipeline.md): This file documents the deprecated MetriCal pipeline command, which allowed running series of MetriCal commands in sequence from JSON configuration files. - [Report Mode](/metrical/commands/report.md): This file documents the MetriCal report command, which generates comprehensive calibration reports from plex files or calibration output files in human-readable formats. - [Focus](/metrical/commands/shape/shape_focus.md): This file documents the MetriCal shape focus command, which creates hub-and-spoke plex configurations where all components are spatially connected to one focus component. - [Lookup Table](/metrical/commands/shape/shape_lut.md): This file documents the MetriCal shape LUT command, which creates pixel-wise lookup tables for single camera distortion correction and image undistortion. - [Stereo Lookup Table](/metrical/commands/shape/shape_lut_stereo.md): This file documents the MetriCal shape stereo LUT command, which creates pixel-wise lookup tables for stereo camera pair rectification and stereo vision applications. - [Minimum Spanning Tree](/metrical/commands/shape/shape_mst.md): This file documents the MetriCal shape MST command, which creates a plex containing only the minimum spanning tree of spatial components with optimal covariance connections. - [Shape Mode](/metrical/commands/shape/shape_overview.md): This file provides an overview of MetriCal's shape command, which transforms plex files into various useful output formats for practical deployment and system integration. - [Tabular](/metrical/commands/shape/shape_tabular.md): This file documents the MetriCal shape tabular command, which exports intrinsic and extrinsic calibration data from plex files in simplified table-based CSV formats. - [URDF](/metrical/commands/shape/shape_urdf.md): This file documents the MetriCal shape URDF command, which creates ROS-compatible URDF files from plex configurations for robotic system integration. ### configuration - [Valid Data Formats](/metrical/configuration/data_formats.md): This file documents the supported data formats in MetriCal including MCAP files, ROS bags, folder datasets, and their respective message encodings and requirements. 
- [Group Management](/metrical/configuration/groups.md): This file explains group management in the Tangram Vision Hub, including how to name groups, add members, assign roles, and manage organizational permissions for MetriCal usage. - [Installation](/metrical/configuration/installation.md): This file provides installation instructions for MetriCal via apt repository for Ubuntu/Pop!_OS systems and via Docker for other operating systems. - [License Creation](/metrical/configuration/license_creation.md): This file explains how to create personal and group licenses for MetriCal through the Tangram Vision Hub, including license naming, creation, and revocation procedures. - [License Usage](/metrical/configuration/license_usage.md): This file provides comprehensive instructions for using MetriCal license keys, including command line arguments, environment variables, configuration files, and offline licensing options. - [Visualization](/metrical/configuration/visualization.md): This file explains how to set up and use MetriCal's visualization features with Rerun for visual inspection of calibration data, detections, and spatial alignment verification. ### core_concepts - [Components](/metrical/core_concepts/components.md): This file defines components as atomic sensing units in MetriCal's Plex system, detailing camera, LiDAR, and IMU component types with their intrinsic parameters and modeling approaches. - [Constraints](/metrical/core_concepts/constraints.md): This file defines constraints as spatial, temporal, or semantic relations between components in MetriCal's Plex system, explaining directional conventions and extrinsic transformations. - [Covariance](/metrical/core_concepts/covariance.md): This file explains covariance as a measure of uncertainty in MetriCal's Plex system, differentiating it from binary fixed/variable parameters and providing statistical modeling capabilities. - [Object Space](/metrical/core_concepts/object_space_overview.md): This file explains MetriCal's Object Space concept, which refers to known sets of features in the environment used for cross-modality calibration and establishing metric scale. - [Plex](/metrical/core_concepts/plex_overview.md): This file explains MetriCal's Plex concept, which represents the spatial, temporal, and semantic relationships within perception systems as a graph of components and constraints. - [Projective Compensation](/metrical/core_concepts/projective_compensation.md): This file explains projective compensation, a phenomenon where errors in one calibration parameter affect other parameters due to correlation in observations or poor model choices. ### intro This file introduces MetriCal, a tool for accurate and precise calibration of multimodal sensor suites with support for cameras, LiDAR, IMU sensors, and various calibration targets. - [Welcome to MetriCal](/metrical/intro.md): This file introduces MetriCal, a tool for accurate and precise calibration of multimodal sensor suites with support for cameras, LiDAR, IMU sensors, and various calibration targets. ### results - [MetriCal JSON Output](/metrical/results/output_file.md): This file documents MetriCal's comprehensive JSON output format containing optimized plex data, object space information, and detailed calibration metrics and residuals. - [MetriCal Reports](/metrical/results/report.md): This file explains MetriCal's comprehensive calibration reports, including charts, diagnostics, metrics interpretation, and HTML report generation for calibration quality assessment. 
- [Circle Misalignment](/metrical/results/residual_metrics/circle_misalignment.md): This file explains circle misalignment metrics unique to MetriCal, which measure the bridging between camera 2D features and LiDAR 3D point clouds using circular calibration targets. - [Composed Relative Extrinsics](/metrical/results/residual_metrics/composed_relative_extrinsics.md): This file documents composed relative extrinsics error metrics in MetriCal, which measure consistency between relative extrinsics formed through object spaces and direct component transforms. - [Image Reprojection](/metrical/results/residual_metrics/image_reprojection.md): This file explains image reprojection error metrics in MetriCal, which measure the precision of camera calibration by comparing feature positions in images to their object space counterparts. - [IMU PreIntegration Error](/metrical/results/residual_metrics/imu_preintegration_error.md): This file documents IMU preintegration error metrics in MetriCal, which measure the consistency of IMU measurements across preintegration windows defined by navigation states. - [Interior Points to Plane Error](/metrical/results/residual_metrics/interior_points_to_plane_error.md): This file explains interior points to plane error metrics in MetriCal, which measure the fit between LiDAR points detected on circular target surfaces and the actual target plane. - [Object Inertial Extrinsics Error](/metrical/results/residual_metrics/object_inertial_extrinsic_error.md): This file explains object inertial extrinsics error metrics in MetriCal, which measure errors between sequences of measured and optimized extrinsics involving IMU navigation states. - [Paired 3D Point Error](/metrical/results/residual_metrics/paired_3d_point_error.md): This file documents paired 3D point error metrics in MetriCal, which measure LiDAR alignment quality by comparing detected circle centers between different LiDAR frames of reference. - [Paired Plane Normal Error](/metrical/results/residual_metrics/paired_plane_normal_error.md): This file documents paired plane normal error metrics in MetriCal, which measure LiDAR alignment quality by comparing detected plane normals of circle targets between LiDAR frames of reference. ### special_topics - [Calibrate RealSense Sensors](/metrical/special_topics/calibrate_realsense.md): This file provides a tutorial for calibrating multiple Intel RealSense sensors simultaneously using MetriCal, including data recording procedures and calibration flashing. - [Migrate from Kalibr to MetriCal](/metrical/special_topics/kalibr_to_metrical_migration.md): This file provides a comprehensive guide for migrating calibration workflows from Kalibr to MetriCal, highlighting operational differences and improved capabilities. - [Experiment with Different Intrinsics Models](/metrical/special_topics/test_different_intrinsics.md): This file demonstrates how to quickly experiment with different camera intrinsics models using the same dataset in MetriCal, utilizing cached detections for faster iterations. ### support_and_admin - [Billing](/metrical/support_and_admin/billing.md): This file explains how to create, cancel, and manage MetriCal subscriptions, payment methods, and billing contact information through the Tangram Vision Hub. - [Contact Us](/metrical/support_and_admin/contact.md): This file provides contact information for MetriCal user support, partnering inquiries, and citation guidelines for academic research. 
- [Legal Information](/metrical/support_and_admin/legal.md): This file contains legal disclaimers, confidentiality notices, and comprehensive third-party license information for the Tangram Vision Platform and MetriCal software. ### targets - [Multiple Targets](/metrical/targets/multiple_targets.md): This file explains how to use multiple calibration targets simultaneously in MetriCal, including object space configuration, UUID generation, and target positioning for complex setups. - [Target Construction Guide](/metrical/targets/target_construction.md): This file provides comprehensive guidance on constructing calibration targets for MetriCal, including target selection, printing considerations, and quality control measures. - [Supported Targets](/metrical/targets/target_overview.md): This file documents all supported calibration targets in MetriCal including AprilGrid, markerboards, checkerboards, and LiDAR circle targets with their specifications and usage guidelines. --- # Full Documentation Content # IMU Data Capture Unlike other modalities supported by MetriCal, IMU calibrations do not require any specific object space or target, as these types of components (accelerometers and gyroscopes) measure forces and rotational velocities directly. What IMUs do require, however, is that they are calibrated alongside one or more cameras. The advice herein assumes that your dataset will be configured to perform a camera ↔ IMU calibration. ## Video Tutorial[​](#video-tutorial "Direct link to Video Tutorial") ## Practical Example[​](#practical-example "Direct link to Practical Example") We've captured an example of a good IMU ↔ Camera calibration dataset that you can use to test out MetriCal. If it's your first time performing an IMU calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like. ### Running the Example Dataset[​](#running-the-example-dataset "Direct link to Running the Example Dataset") First, [download the dataset here](https://drive.google.com/file/d/1DKciDOhJtqMHWh22OHGS7c_hL4K4nIQC/view?usp=drive_link) and unzip it somewhere on your computer. Then, copy the following bash script into the dataset directory: warning This script assumes that you have either installed the apt version of MetriCal, or that you are using the docker version with an alias set to `metrical`. For more information, please review the MetriCal [installation instructions.](/metrical/configuration/installation.md) ``` #!/usr/bin/env bash set -e DATA=camera_imu_box3.mcap PLEX=plex.json RESULTS=results.json REPORT=report.html OBJ=object.json metrical init -y \ -m /rs/color/image_raw:opencv_radtan \ -m /rs/imu:scale_shear_rotation \ $DATA $PLEX metrical calibrate \ --render \ --camera-motion-threshold 4.0 \ -o $RESULTS \ --report-path $REPORT \ $DATA $PLEX $OBJ ``` Before running the script, let's take note of a couple things: * The `metrical init` command uses the `-m` flag to describe the system being calibrated. It indicates that the dataset topic `/rs/color/image_raw` represents a camera that needs `opencv_radtan` intrinsics generated, and that the `/rs/imu` topic is an IMU that needs its scale, shear, and rotation calibrated. The configuration generated by `metrical init` is saved to a file named `plex.json`, which will be used during the calibration step to configure the system.
You can learn more about plexes [here.](/metrical/core_concepts/plex_overview.md) * `metrical calibrate` is being passed a `--camera-motion-threshold` value to help deal with some of the intense motion of the dataset. This is described [below.](#excite-all-imu-axes-during-capture) Finally, note the `--render` flag being passed to `metrical calibrate`. This flag will allow us to watch the detection phase of the calibration as it happens in realtime. This can have a large impact on performance, but is invaluable for debugging data quality issues. Rendering has been enabled here so that you can watch the dataset as it's being processed. warning MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the `--render` flag. Please ensure that you've followed the [visualization configuration instructions](/metrical/configuration/visualization.md) before running this script. You should now be ready to run the script. When you start it, it will display a visualization window like the following with both IMU graphs and detections overlaid on the camera frames: ![A screenshot of the MetriCal detection visualization](/assets/images/camera_imu_visualization-4a246d6b29385a1513aa30664a8ef1cd.png) While the calibration is running, take specific note of the frequency and magnitude of the sensor motion, as well as the still periods following periods of motion. When it comes time to capture your own data, try to replicate these motion patterns to the best of your ability. When the script finishes, you'll be left with three artifacts: * `plex.json` - as described in the prior section. * `report.html` - a human-readable summary of the calibration run. Everything in the report is also logged to your console in realtime during the calibration. You can learn more about interpreting the report [here.](/metrical/results/report.md) * `results.json` - a file containing the final calibration and various other metrics. You can learn more about results JSON files [here](/metrical/results/output_file.md) and about manipulating your results using `shape` commands [here](/metrical/commands/shape/shape_overview.md). And that's it! Hopefully this trial run will have given you a better understanding of how to capture your own IMU ↔ Camera calibration. ## Data Capture Guidelines[​](#data-capture-guidelines "Direct link to Data Capture Guidelines") ### Maximize View of Object Spaces for Duration of Capture[​](#maximize-view-of-object-spaces-for-duration-of-capture "Direct link to Maximize View of Object Spaces for Duration of Capture") IMU calibrations are structured such that MetriCal jointly solves for the first order gyroscope and accelerometer biases in addition to solving for the relative IMU-from-camera extrinsic. This is done by comparing the world pose (or world extrinsic) of the camera between frames to the expected motion that is measured by the accelerometer and gyroscope of the IMU. Because of how this problem is posed, the best way to produce consistent, precise IMU ↔ camera calibrations is to maximize the visibility of one or more targets in the object space from one of the cameras being calibrated alongside the IMU. Put in a different way: avoid capturing sections of data where the IMU is recording but where no object space or target can be seen from any camera. Doing so can lead to misleading bias estimates.
### Excite All IMU Axes During Capture[​](#excite-all-imu-axes-during-capture "Direct link to Excite All IMU Axes During Capture") IMU calibrations are no different from any other modality in that they are an entirely data-driven process. In particular, the underlying data needs to demonstrate observed translational and rotational motions in order for MetriCal to understand the motion path that the IMU has followed. This is what is meant by "exciting" an IMU: accelerations and rotational velocities must stand out in the data (different enough from the underlying noise in the measurements) so that they are separately observable from e.g. the biases. This means that when capturing data to calibrate between an IMU and one or more cameras, it is important to move the IMU translationally across all accelerometer axes, and rotationally about all gyroscope axes. This motion can be repetitive so long as a sufficient magnitude of motion has been achieved. It is difficult to prescribe an exact magnitude, as it depends on what kind of IMU is being calibrated (e.g. MEMS IMU, how large it is, what kinds of noise it measures, etc.). success We suggest alternating between periods of "excitement" or motion with the IMU and holding still so that the camera(s) can accurately and precisely measure the given object space. If you find yourself still having trouble getting a sufficient number of observations to produce reliable calibrations, we suggest bumping up the threshold for our motion filter heuristic when calling `metrical calibrate`. This is controlled by the `--camera-motion-threshold` flag. A value of 3.0 through 5.0 can sometimes improve the quality of the calibration a significant amount. ### Reduce Motion Blur in Camera Images[​](#reduce-motion-blur-in-camera-images "Direct link to Reduce Motion Blur in Camera Images") This advice holds for both multi-camera and IMU ↔ camera calibrations. It is often advisable to reduce the effects of motion in the images to produce more crisp, detailed images to calibrate against. Some ways to do this are to: 1. Always use a global shutter camera 2. Reduce the overall exposure time of the camera 3. Avoid over-exciting the IMU, and don't be afraid to slow down a little if you find that the object space is rarely (or never) detected. danger MetriCal currently does not perform well with IMU ↔ camera calibrations if the camera is a rolling shutter camera. We advise calibrating with a global shutter camera whenever possible. --- # Camera ↔ LiDAR Calibration LiDAR Circle Target All of MetriCal's LiDAR calibrations require a [circle target](/metrical/targets/target_overview.md) to be used in the object space. ## Video Tutorial[​](#video-tutorial "Direct link to Video Tutorial") ## MetriCal Workflows[​](#metrical-workflows "Direct link to MetriCal Workflows") MetriCal offers two approaches to Camera-LiDAR calibration: combined (all-at-once) and staged. The best approach depends on your specific hardware setup and calibration requirements.
### Combined Calibration Approach[​](#combined-calibration-approach "Direct link to Combined Calibration Approach") The combined approach calibrates camera intrinsics and Camera-LiDAR extrinsics simultaneously, which typically yields the most accurate results: ``` # Initialize the calibration metrical init -m camera*:eucm -m lidar*:lidar $INIT_PLEX # Run the calibration metrical calibrate --render -o $OUTPUT $DATA $INIT_PLEX $OBJ ``` Parameters: * `camera*:eucm`: Assigns the extended unified camera model to all topics named "camera\*" * `lidar*:lidar`: Assigns the LiDAR model to all topics named "lidar\*" * `--render`: Enables visualization * `-o`: Specifies output file location This approach works best when: * Your LiDAR and camera(s) can simultaneously view the calibration target * The circle target can be positioned to fill the camera's field of view * You have good control over the target positioning ### Staged Calibration Approach for Complex Camera Rigs[​](#staged-calibration-approach-for-complex-camera-rigs "Direct link to Staged Calibration Approach for Complex Camera Rigs") For complex rigs where optimal camera calibration and optimal LiDAR-camera calibration require different setups, a staged approach is recommended: ``` # Step 1: Calibrate camera intrinsics first CAMERA_DATA=/path/to/camera_only.mcap # Dataset focused on optimal camera intrinsics CAMERA_RESULTS=/path/to/camera_cal.json metrical init -m /camera:eucm $CAMERA_INIT_PLEX metrical calibrate -o $CAMERA_RESULTS $CAMERA_DATA $CAMERA_INIT_PLEX $OBJ # Step 2: Use camera intrinsics to seed a Camera-LiDAR calibration LIDAR_CAM_DATA=/path/to/lidar_camera.mcap # Dataset focused on Camera-LiDAR extrinsics FINAL_RESULTS=/path/to/final_cal.json metrical init -p $CAMERA_RESULTS -m /camera:eucm -m /lidar:lidar $LIDAR_CAM_INIT_PLEX metrical calibrate -o $FINAL_RESULTS $LIDAR_CAM_DATA $LIDAR_CAM_INIT_PLEX $OBJ ``` This staged approach allows you to: 1. Capture optimal data for camera intrinsics (getting close to ensure full FOV coverage) 2. Use a separate dataset focused on good LiDAR-camera observations for extrinsics 3. Lock in the camera intrinsics during the second calibration stage The staged approach is particularly useful when: * Your camera is mounted in a hard-to-reach position * You need different viewing distances for optimal camera calibration vs. LiDAR-camera calibration * You're calibrating a complex rig with multiple sensors For details on camera-only calibration, see the [Single Camera Calibration guide](/metrical/calibration_guides/single_camera_cal.md) and the [Multi-Camera Calibration guide](/metrical/calibration_guides/multi_camera_cal.md). ## Practical Example[​](#practical-example "Direct link to Practical Example") We've captured an example of a good LiDAR ↔ Camera calibration dataset that you can use to test out MetriCal. If it's your first time performing a LiDAR calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like. ### Running the Example Dataset[​](#running-the-example-dataset "Direct link to Running the Example Dataset") First, [download the dataset here](https://drive.google.com/file/d/1rog7F2SF7fXlWoROm746lxJHnyP_c7eV/view?usp=drive_link) and unzip it somewhere on your computer. Then, copy the following bash script into the dataset directory: warning This script assumes that you have either installed the apt version of MetriCal, or that you are using the docker version with an alias set to `metrical`. 
For more information, please review the MetriCal [installation instructions.](/metrical/configuration/installation.md) ``` #!/bin/bash -i DATA=./camera_lidar_capture.mcap INIT_PLEX=init_plex.json OBJ=obj_circle.json OUTPUT=results.json REPORT=results.html metrical init \ -y \ -m */image_rect_raw:no_distortion \ -m /velodyne*:lidar \ $DATA $INIT_PLEX metrical calibrate \ --render \ --disable-motion-filter \ --report-path $REPORT \ -o $OUTPUT $DATA $INIT_PLEX $OBJ ``` The `metrical init` command uses the `-m` flag to describe the system being calibrated. It indicates that any topics ending in `/image_rect_raw` are from cameras with already rectified images (`no_distortion`), and that any topics starting with `/velodyne` are LiDAR topics. We can use the `*` glob character to capture multiple topics, or to just prevent extra typing when telling MetriCal which topics to use. The configuration generated by `metrical init` is saved to a file named `init_plex.json`, which will be used during the calibration step to configure the system. You can learn more about plexes [here.](/metrical/core_concepts/plex_overview.md) Finally, note the `--render` flag being passed to `metrical calibrate`. This flag will allow us to watch the detection phase of the calibration as it happens in realtime. This can have a large impact on performance, but is invaluable for debugging data quality issues. Rendering has been enabled here so that you can watch the dataset as it's being processed. warning MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the `--render` flag. Please ensure that you've followed the [visualization configuration instructions](/metrical/configuration/visualization.md) before running this script. You should now be ready to run the script. When you start it, it will display a visualization window like the following with a LiDAR point cloud and detections overlaid on the camera frames. Note that the LiDAR circle detections will show up as red points in the point cloud. ![A screenshot of the MetriCal detection visualization](/assets/images/camera_lidar_visualization-ac3e6f16bea2795d62fc1bdb61b84935.png) While the calibration is running, take specific note of the target motion patterns, presence of still periods, and breadth of camera coverage. When it comes time to design a motion sequence for your own systems, try to apply any learnings you take from watching this capture. When the script finishes, you'll be left with three artifacts: * `init_plex.json` - as described in the prior section. * `results.html` - a human-readable summary of the calibration run. Everything in the report is also logged to your console in realtime during the calibration. You can learn more about interpreting the report [here.](/metrical/results/report.md) * `results.json` - a file containing the final calibration and various other metrics. You can learn more about results JSON files [here](/metrical/results/output_file.md) and about manipulating your results using `shape` commands [here](/metrical/commands/shape/shape_overview.md). ### Residual Metrics[​](#residual-metrics "Direct link to Residual Metrics") For camera-LiDAR calibration, MetriCal outputs two key metrics: 1. **Circle misalignment RMSE**: Indicates how well the center of the markerboard (detected in camera space) aligns with the geometric center of the retroreflective circle (in LiDAR space). Typically, values around 3cm or less indicate a good calibration. 2.
**Interior point-to-plane RMSE** (if `detect_interior_points` was enabled): Measures how well the interior points of the circle align with the plane of the circle. Values around 1cm or less are considered good. ![Camera-LiDAR residual metrics](/assets/images/lidar_summary_stats_demo-fa59cfce9ffebefe8e10e9a5c9649a93.png) ### Visualizing Results[​](#visualizing-results "Direct link to Visualizing Results") You can focus on LiDAR detections by double-clicking on `detections` in the Rerun interface ![LiDAR detections](/assets/images/livox_detections-6ad14bae5cf8019dda871a58e9116ce8.gif) To see the aligned camera-LiDAR view, double-click on a camera in the `corrections` space: ![Camera-LIVOX alignment](/assets/images/livox_demo-0ec26f87d62899a511ff4cf580df0b5e.gif) If the calibration quality doesn't meet your requirements, consider recapturing data at a slower pace or modifying the target with more/less retroreflectivity. ## Data Capture Guidelines[​](#data-capture-guidelines "Direct link to Data Capture Guidelines") LiDAR ↔ Camera calibrations are currently an extrinsics-only calibration. Note that while we only solve for the extrinsics between the LiDAR and the camera components, the calibration is still a joint calibration as camera *intrinsics* are still estimated as well. ### Best Practices[​](#best-practices "Direct link to Best Practices") | DO | DON'T | | ----------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------ | | ✅ Vary distance from sensors to target a few times to capture a wider range of readings. | ❌ Stand at a single distance from the sensors. | | ✅ Make sure some of your captures fill much of the camera field of view. | ❌ Only capture poses where the target does not capture much of the cameras' fields of view. | | ✅ Rotate the target on its planar axis periodically to capture poses of the board at different orientations. | ❌ Keep the board in a single orientation for every pose. | | ✅ Maximize convergence angles by occasionally angling the target up/down and left/right during poses. | ❌ Keep the target in a single orientation, thereby minimizing convergence angles. | | ✅ Pause in between poses to eliminate motion blur effects. | ❌ Move the target constantly or rapidly during data capture. | | ✅ Bring the target close to each camera lens and capture target poses across each camera's entire field of view. | ❌ Stay distant from the cameras or only capture poses that cover part of each camera's field of view. | | ✅ Achieve convergence angles of 35° or more on each axis. | | ### Using The Circle Target[​](#using-the-circle-target "Direct link to Using The Circle Target") MetriCal uses a special target called a Lidar Circle target, or simply "the circle". The circle consists of two parts: 1. A markerboard cut into a circle 2. Retroreflective tape around the edge of the circle Its design allows MetriCal to bridge the modalities of camera (a projection of Euclidean space onto a plane) and LiDAR (a representation of Euclidean space). 
![Target: Circle Target](/assets/images/circle_plane_construction-bc071c3a36cdcef10afa5503722bffaa.png) Since we're using a markerboard to associate the camera with a position in space, all the same data collection considerations for cameras apply: * Rotate the circle * Capture several angles * Capture the circle at different distances Following camera data collection best practices will also guarantee good data for LiDAR calibration. LiDAR Intrinsics MetriCal doesn't currently calibrate LiDAR intrinsics. If you're interested in calibrating the intrinsics of your LiDAR for better accuracy and precision, get in touch! ### Maximize Variation Across All 6 Degrees Of Freedom[​](#maximize-variation-across-all-6-degrees-of-freedom "Direct link to Maximize Variation Across All 6 Degrees Of Freedom") When capturing the circular target with your camera and LiDAR components, ensure that the target can be seen from a variety of poses by varying: * **Roll, pitch, and yaw rotations** of the target relative to the sensors * **X, Y, and Z translations** between poses MetriCal performs LiDAR-based calibrations best when the target can be seen from a range of different depths. Try moving forward and away from the target in addition to moving laterally relative to the sensors. ### Pause Between Different Poses[​](#pause-between-different-poses "Direct link to Pause Between Different Poses") MetriCal performs more consistently if there is less motion blur or motion-based artifacts in the captured data. For the best calibrations, pause for 1-2 seconds after every pose of the board. Constant motion in a dataset will typically yield poor results. ## Advanced Considerations[​](#advanced-considerations "Direct link to Advanced Considerations") ### Beware of LiDAR Noise[​](#beware-of-lidar-noise "Direct link to Beware of LiDAR Noise") Some LiDAR sensors can have significant noise when detecting retroreflective surfaces. This can cause a warping effect in the point cloud data, where points are spread out in a way that makes it difficult to detect the true surface of the circle. ![Warping in LiDAR Points from reflectance](/assets/images/lidar_bloom-2c22e69d8496d694691b9e83c5483dce.png) For Ouster LiDARs, this is caused by the retroreflective material saturating the photodiodes and affecting the time-of-flight estimation. To prevent this warping effect, you can lower the signal strength of the emitted beam by sending a `POST` request and modifying the `signal_multiplier` to 0.25 as shown in the [Ouster documentation](https://static.ouster.dev/sensor-docs/image_route1/image_route2/common_sections/API/http-api-v1.html#post-api-v1-sensor-config-multiple-configuration-parameters). ## Troubleshooting[​](#troubleshooting "Direct link to Troubleshooting") If you encounter errors during calibration, please refer to our [Errors](/metrical/commands/command_errors.md) documentation. Common issues with Camera ↔ LiDAR calibration include: * No features detected (cal-calibrate-001) * Too many component observations filtered (cal-calibrate-006) * No compatible component type to register against (cal-calibrate-007) Remember that all measurements for your targets should be in meters, and you should ensure visibility of as much of the target as possible when collecting data. --- # Calibration Guide Overview MetriCal's calibration is a joint process, which includes calibrating intrinsics and extrinsics at the same time. This overview provides general guidelines for calibration before diving into the specific calibration guides. 
## General Calibration Principles[​](#general-calibration-principles "Direct link to General Calibration Principles") The following principles apply to all camera calibration scenarios: ### Target Selection[​](#target-selection "Direct link to Target Selection") MetriCal supports various calibration targets: * Checkerboard patterns * Charuco boards * AprilTag grids * Circular dot patterns (also used for LiDAR calibration) Choose a target appropriate for your camera setup and ensure its measurements are precise and specified in meters. ### Valid Data Formats[​](#valid-data-formats "Direct link to Valid Data Formats") MetriCal takes a number of different input data types, but there are some special considerations that every user should be aware of. Reference the [Valid Data Formats](/metrical/configuration/data_formats.md) documentation for more details. ### Data Quality Considerations[​](#data-quality-considerations "Direct link to Data Quality Considerations") For all camera calibrations, regardless of how many cameras you're calibrating: 1. **Image Focus**: Always ensure targets are in focus 2. **Field of View Coverage**: Capture the target across the entire field of view 3. **Target Orientations**: When using multiple targets, rotate each one to a different orientation 4. **Convergence Angles**: Vary the viewing angle to improve parameter estimation 5. **Motion Clarity**: Avoid motion blur by pausing between poses 6. **Depth Variation**: When possible, use non-planar targets or multiple targets at different depths ## Sensor Calibration Guides[​](#sensor-calibration-guides "Direct link to Sensor Calibration Guides") Good sensor calibration is key to building accurate perception systems. MetriCal helps you calibrate many types of sensors—from a single camera to complex multi-sensor setups. Our guides take you through each step of the process, from collecting data to checking your results. We cover single cameras, multiple cameras, and camera-LiDAR combinations with clear instructions and practical tips. Camera ↔ LiDAR ↔ IMU calibrations are also possible in MetriCal. These multi-modal calibrations follow all the same guidelines as their more specialized guides. Choose the guide below that matches your setup to get started.
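As a rough sketch (not one of the official example datasets), a combined Camera ↔ LiDAR ↔ IMU run follows the same `init`/`calibrate` pattern used throughout the individual guides. The topic names, camera model, and file names below are placeholders; adapt them to your own system and review the relevant guides for data capture advice.

```
# Illustrative only: hypothetical topics (/camera, /lidar, /imu) and file names.
DATA=capture.mcap
INIT_PLEX=init_plex.json
OBJ=object.json
RESULTS=results.json

# Assign a model to each topic, then run the joint calibration.
metrical init -y \
  -m /camera:eucm \
  -m /lidar:lidar \
  -m /imu:scale_shear_rotation \
  $DATA $INIT_PLEX

metrical calibrate --render \
  -o $RESULTS \
  $DATA $INIT_PLEX $OBJ
```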
[![Single Camera Calibration](/assets/images/guide_single_camera-84daf5456cf83011e4bb31d0e2a90d02.png)](/metrical/calibration_guides/single_camera_cal.md) ### [Single Camera Calibration](/metrical/calibration_guides/single_camera_cal.md) [Calibrate intrinsics for a single camera](/metrical/calibration_guides/single_camera_cal.md) [View Guide →](/metrical/calibration_guides/single_camera_cal.md) [![Multi-Camera Calibration](/assets/images/guide_multi_camera-5dcd4150997955a8550369ad7ff344b0.png)](/metrical/calibration_guides/multi_camera_cal.md) ### [Multi-Camera Calibration](/metrical/calibration_guides/multi_camera_cal.md) [Calibrate multiple cameras together](/metrical/calibration_guides/multi_camera_cal.md) [View Guide →](/metrical/calibration_guides/multi_camera_cal.md) [![Camera ↔ LiDAR Calibration](/assets/images/guide_camera_lidar-8c819ec541379ef39cd30091d9f2c5da.png)](/metrical/calibration_guides/camera_lidar_cal.md) ### [Camera ↔ LiDAR Calibration](/metrical/calibration_guides/camera_lidar_cal.md) [Calibrate cameras with LiDAR sensors](/metrical/calibration_guides/camera_lidar_cal.md) [View Guide →](/metrical/calibration_guides/camera_lidar_cal.md) [![Camera ↔ IMU Calibration](/assets/images/guide_camera_imu-dc8e90a97865eebc696d151c26d3ae00.png)](/metrical/calibration_guides/camera_imu_cal.md) ### [Camera ↔ IMU Calibration](/metrical/calibration_guides/camera_imu_cal.md) [Calibrate cameras with IMU sensors](/metrical/calibration_guides/camera_imu_cal.md) [View Guide →](/metrical/calibration_guides/camera_imu_cal.md) [![LiDAR ↔ LiDAR Calibration](/assets/images/guide_lidar_lidar-bb8ff0a32d3b043aac0698339fad7445.png)](/metrical/calibration_guides/lidar_lidar_cal.md) ### [LiDAR ↔ LiDAR Calibration](/metrical/calibration_guides/lidar_lidar_cal.md) [Calibrate multiple LiDAR sensors](/metrical/calibration_guides/lidar_lidar_cal.md) [View Guide →](/metrical/calibration_guides/lidar_lidar_cal.md) ## Calibration Workflow[​](#calibration-workflow "Direct link to Calibration Workflow") 1. **Setup**: Prepare your sensor(s) and calibration target(s) 2. **Data Capture**: Follow the guidelines in the specific calibration guide 3. **Processing**: Run MetriCal on the captured data 4. **Validation**: Review the [calibration results](/metrical/results/report.md) and [data diagnostics](/metrical/results/report.md#data-diagnostics) 5. **Refinement**: If needed, capture additional data to improve results For detailed instructions on each calibration type, refer to the specific guides linked above. --- # LiDAR ↔ LiDAR Calibration LiDAR Circle Target All of MetriCal's LiDAR calibrations require a [circle target](/metrical/targets/target_overview.md) to be used in the object space. ## Practical Example[​](#practical-example "Direct link to Practical Example") We've captured an example of a good LiDAR ↔ LiDAR calibration dataset that you can use to test out MetriCal. If it's your first time performing a LiDAR calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like. ### Running the Example Dataset[​](#running-the-example-dataset "Direct link to Running the Example Dataset") First, [download the dataset here](https://drive.google.com/file/d/1_BaGJ-NjpBJHEuH1c-NJmG2VHocvj4kv/view?usp=drive_link) and unzip it somewhere on your computer. 
Then, copy the following bash script into the dataset directory: warning This script assumes that you have either installed the apt version of MetriCal, or that you are using the docker version with an alias set to `metrical`. For more information, please review the MetriCal [installation instructions.](/metrical/configuration/installation.md) ``` #!/bin/bash -i DATA=./lidar_lidar2.mcap INIT_PLEX=init_plex.json OBJ=object.json OUTPUT=results.json REPORT=results.html metrical init \ -y \ -m *livox*:lidar \ -m *points*:lidar \ $DATA $INIT_PLEX metrical calibrate \ --render \ --report-path $REPORT \ -o $OUTPUT $DATA $INIT_PLEX $OBJ ``` The `metrical init` command uses the `-m` flag to describe the system being calibrated. It indicates that any topics containing the words `livox` or `points` are our LiDAR topics. We can use the `*` glob character to capture multiple topics, or to just prevent extra typing when telling MetriCal which topics to use. The configuration generated by `metrical init` is saved to a file named `init_plex.json`, which will be used during the calibration step to configure the system. You can learn more about plexes [here.](/metrical/core_concepts/plex_overview.md) Finally, note the `--render` flag being passed to `metrical calibrate`. This flag will allow us to watch the detection phase of the calibration as it happens in realtime. This can have a large impact on performance, but is invaluable for debugging data quality issues. Rendering has been enabled here so that you can watch the dataset as it's being processed. warning MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the `--render` flag. Please ensure that you've followed the [visualization configuration instructions](/metrical/configuration/visualization.md) before running this script. You should now be ready to run the script. When you start it, it will display a visualization window like the following with two LiDAR point clouds in the same coordinate frame (but not registered to one another yet). Note that the LiDAR circle detections will show up as red points in the point cloud. ![A screenshot of the MetriCal detection visualization](/assets/images/lidar_lidar_visualization-ba494a8de737c0615f8dc328e6d5b077.png) While the calibration is running, take specific note of the target motion patterns, presence of still periods, and breadth of coverage. When it comes time to design a motion sequence for your own systems, try to apply any learnings you take from watching this capture. When the script finishes, you'll be left with three artifacts: * `init_plex.json` - as described in the prior section. * `results.html` - a human-readable summary of the calibration run. Everything in the report is also logged to your console in realtime during the calibration. You can learn more about interpreting the report [here.](/metrical/results/report.md) * `results.json` - a file containing the final calibration and various other metrics. You can learn more about results JSON files [here](/metrical/results/output_file.md) and about manipulating your results using `shape` commands [here](/metrical/commands/shape/shape_overview.md). And that's it! Hopefully this trial run will have given you a better understanding of how to capture your own LiDAR ↔ LiDAR calibration.
## Data Capture Guidelines[​](#data-capture-guidelines "Direct link to Data Capture Guidelines") Many of the tips for LiDAR ↔ LiDAR data capture are similar to [Camera ↔ LiDAR capture](/metrical/calibration_guides/camera_lidar_cal.md). Below we've outlined best practices for capturing a dataset that will consistently produce a high-quality calibration between two LiDAR components. ### Best Practices[​](#best-practices "Direct link to Best Practices") | DO | DON'T | | ----------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | | ✅ Vary distance from sensors to target to capture a wider range of readings. | ❌ Stand at a single distance from the sensors. | | ✅ Ensure good point density on the target for all LiDAR sensors. | ❌ Only capture poses where the target does not have sufficient point density. | | ✅ Rotate the target on its planar axis to capture poses at different orientations. | ❌ Keep the board in a single orientation for every pose. | | ✅ Maximize convergence angles by angling the target up/down and left/right during poses. | ❌ Keep the target in a single orientation, minimizing convergence angles. | | ✅ Pause in between poses (1-2 seconds) to eliminate motion artifacts. | ❌ Move the target constantly or rapidly during data capture. | | ✅ Achieve convergence angles of 35° or more on each axis when possible. | | ### Maximize Variation Across All 6 Degrees Of Freedom[​](#maximize-variation-across-all-6-degrees-of-freedom "Direct link to Maximize Variation Across All 6 Degrees Of Freedom") When capturing the circular target with multiple LiDAR components, ensure that the target can be seen from a variety of poses by varying: * **Roll, pitch, and yaw rotations** of the target relative to the LiDAR sensors * **X, Y, and Z translations** between poses MetriCal performs LiDAR-based calibrations best when the target can be seen from a range of different depths. Try moving forward and away from the target in addition to moving laterally relative to the sensors. The field of view of a LiDAR component is often much greater than a camera, so be sure to capture data that fills this larger field of view. ## Troubleshooting[​](#troubleshooting "Direct link to Troubleshooting") If you encounter errors during calibration, please refer to our [Errors](/metrical/commands/command_errors.md) documentation. Common issues with LiDAR ↔ LiDAR calibration include: * No features detected (cal-calibrate-001) * Too many component observations filtered (cal-calibrate-006) * Compatible component type has no detections (cal-calibrate-008) Remember that when working with multiple LiDAR sensors, it's important to ensure all sensors can clearly see the calibration target throughout the data capture process. --- # Multi-Camera Calibration MetriCal's multi-camera calibration is a joint process, which includes calibrating both intrinsics and extrinsics simultaneously. This guide provides specific tips for calibrating multiple cameras together. ## Video Tutorial[​](#video-tutorial "Direct link to Video Tutorial") ## MetriCal Workflows[​](#metrical-workflows "Direct link to MetriCal Workflows") MetriCal offers two approaches to multi-camera calibration: combined (all-at-once) and staged. The best approach depends on your specific hardware setup and calibration requirements. 
### Combined Calibration Approach[​](#combined-calibration-approach "Direct link to Combined Calibration Approach") If all your cameras can simultaneously view calibration targets, you can run a direct multi-camera calibration: ``` # Initialize the calibration metrical init -m /camera1:eucm -m /camera2:eucm $INIT_PLEX # Run the calibration metrical calibrate -o $RESULTS $DATA $INIT_PLEX $OBJ ``` This approach calibrates both intrinsics and extrinsics in a single step. ### Staged Calibration Approach for Complex Camera Rigs[​](#staged-calibration-approach-for-complex-camera-rigs "Direct link to Staged Calibration Approach for Complex Camera Rigs") For large or complex camera rigs where: * Cameras are mounted far apart * Some cameras are in hard-to-reach positions * It's difficult to have all cameras view targets simultaneously A staged approach is recommended: ``` # Step 1: Calibrate individual cameras (intrinsics only) # Front camera metrical init -m /front_cam:eucm $FRONT_INIT_PLEX metrical calibrate -o $FRONT_CAM_RESULTS $FRONT_CAM_DATA $FRONT_INIT_PLEX $OBJ # Back camera metrical init -m /back_cam:eucm $BACK_INIT_PLEX metrical calibrate -o $BACK_CAM_RESULTS $BACK_CAM_DATA $BACK_INIT_PLEX $OBJ # Step 2: Run an extrinsics-only calibration using the individual results metrical init -p $FRONT_CAM_RESULTS -p $BACK_CAM_RESULTS -m *cam*:eucm $RIG_INIT_PLEX metrical calibrate -o $RIG_RESULTS $RIG_DATA $RIG_INIT_PLEX $OBJ ``` This staged approach allows you to: 1. Capture optimal data for each camera's intrinsics (getting close to each camera to fill its FOV) 2. Use a separate dataset with targets visible to multiple cameras to determine extrinsics 3. Avoid the logistical challenges of trying to get optimal data for all cameras simultaneously The final calibration (`$RIG_RESULTS`) will contain both intrinsics and extrinsics for all cameras. For details on calibrating individual cameras, see the [Single Camera Calibration guide](/metrical/calibration_guides/single_camera_cal.md). ## Practical Example[​](#practical-example "Direct link to Practical Example") We've captured an example of a good multi-camera calibration dataset that you can use to test out MetriCal. If it's your first time performing a multi-camera calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like. ### Running the Example Dataset[​](#running-the-example-dataset "Direct link to Running the Example Dataset") First, [download the dataset here](https://drive.google.com/file/d/1NmWp4kXj8Ch6zd82AFmC8A90UEDkrl_I/view?usp=drive_link) and unzip it somewhere on your computer. Then, copy the following bash script into the dataset directory: warning This script assumes that you have either installed the apt version of MetriCal, or that you are using the docker version with an alias set to `metrical`. For more information, please review the MetriCal [installation instructions.](/metrical/configuration/installation.md) ``` #!/bin/bash -i DATA=observations INIT_PLEX=init_plex.json OBJ=obj.json OUTPUT=results.json REPORT=results.html metrical init \ -y \ -m color:opencv_radtan \ -m ir*:no_distortion \ $DATA $INIT_PLEX metrical calibrate \ --render \ --disable-motion-filter \ --report-path $REPORT \ -o $OUTPUT $DATA $INIT_PLEX $OBJ ``` Before running the script, let's take note of a couple things: * The `metrical init` command uses the `-m` flag to describe the system being calibrated.
It indicates that the `color` topic should be calibrated with the `opencv_radtan` model, and that any topics prefixed with `ir` are already rectified, and do not need intrinsics calculated. The configuration generated by `metrical init` is saved to a file named `init_plex.json`, which will be used during the calibration step to configure the system. You can learn more about plexes [here.](/metrical/core_concepts/plex_overview.md) * `metrical calibrate` is being passed the `--disable-motion-filter` flag because our example dataset is a sequence of still frames rather than a full capture. If we were to remove this flag, all of our data would be filtered out because MetriCal would see very large jumps in position between frames. If you are calibrating a full `.mcap`, we generally recommend removing this flag and letting the motion filter run. You can read more about motion filtering [here.](/metrical/commands/calibrate.md#motion-filtering) Finally, note the `--render` flag being passed to `metrical calibrate`. This flag will allow us to watch the detection phase of the calibration as it happens in realtime. This can have a large impact on performance, but is invaluable for debugging data quality issues. Rendering has been enabled here so that you can watch the dataset as it's being processed. warning MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the `--render` flag. Please ensure that you've followed the [visualization configuration instructions](/metrical/configuration/visualization.md) before running this script. You should now be ready to run the script. When you start it, it will display a visualization window like the following with detections overlaid on the camera frames: ![A screenshot of the MetriCal detection visualization](/assets/images/multicam_visualization-aa3e330f19d5f96b23609f9aff7fadae.png) While the calibration is running, take specific note of the target motion patterns, presence of still periods, and breadth of camera coverage. When it comes time to design a motion sequence for your own systems, try to apply any learnings you take from watching this capture. When the script finishes, you'll be left with three artifacts: * `init_plex.json` - as described in the prior section. * `results.html` - a human-readable summary of the calibration run. Everything in the report is also logged to your console in realtime during the calibration. You can learn more about interpreting the report [here.](/metrical/results/report.md) * `results.json` - a file containing the final calibration and various other metrics. You can learn more about results JSON files [here](/metrical/results/output_file.md) and about manipulating your results using `shape` commands [here](/metrical/commands/shape/shape_overview.md). And that's it! Hopefully this trial run will have given you a better understanding of how to capture your own multi-camera calibration. ## Data Capture Guidelines[​](#data-capture-guidelines "Direct link to Data Capture Guidelines") ### Best Practices[​](#best-practices "Direct link to Best Practices") | DO | DON'T | | ---------------------------------------------------------------------- | ------------------------------------------------------------------------ | | ✅ Ensure target is visible to multiple cameras simultaneously. | ❌ Calibrate each camera independently without overlap. | | ✅ Maximize overlap between camera views. | ❌ Have overlap only at the peripheries of wide-angle lenses.
| | ✅ Keep targets in focus in all cameras. | ❌ Capture blurry or out-of-focus images in any camera. | | ✅ Capture the target across the entire field of view for each camera. | ❌ Only place the target in a small part of each camera's field of view. | | ✅ Rotate the target 90° for some captures. | ❌ Keep the target in only one orientation. | | ✅ Capture the target from various angles to maximize convergence. | ❌ Only capture the target from similar angles. | | ✅ Pause between poses to avoid motion blur. | ❌ Move the target continuously during capture. | ### Maximize Overlap Between Images[​](#maximize-overlap-between-images "Direct link to Maximize Overlap Between Images") While it's important to fill the full field-of-view of each individual camera to determine distortions, for multi-camera calibration, cameras must **jointly observe the same object space** to determine the relative extrinsics between them. Once you've observed across the entire field-of-view of each camera individually, focus on capturing the object space in multiple cameras from the same position. The location of this overlap is also important. For example, when working with very-wide field-of-view lenses, having overlap only at the peripheries can sometimes produce odd results, because the overlap is largely contained in high distortion areas of the image. Aim for overlap in varying regions of the cameras' fields of view. ### Basic Camera Calibration Principles[​](#basic-camera-calibration-principles "Direct link to Basic Camera Calibration Principles") All of the principles that apply to single camera calibration also apply to each camera in a multi-camera setup: #### Keep Targets in Focus[​](#keep-targets-in-focus "Direct link to Keep Targets in Focus") Ensure all cameras in your system are focused properly. A lens focused at infinity is recommended for calibration. Knowing the depth of field for each camera helps ensure you never get blurry images in your data. #### Consider Target Orientations[​](#consider-target-orientations "Direct link to Consider Target Orientations") Collect data where the target is captured at both 0° and 90° orientations to de-correlate errors in x and y measurements. This applies to all cameras in your multi-camera setup. #### Fill the Full Field of View[​](#fill-the-full-field-of-view "Direct link to Fill the Full Field of View") For each camera in your setup, ensure you capture the target across the entire field of view, especially near the edges where distortion is greatest. #### Maximize Convergence Angles[​](#maximize-convergence-angles "Direct link to Maximize Convergence Angles") The convergence angle of each camera's pose relative to the object space is important. Aim for convergence angles of 70° or greater when possible. ## Advanced Considerations[​](#advanced-considerations "Direct link to Advanced Considerations") ### Using Multiple Object Spaces[​](#using-multiple-object-spaces "Direct link to Using Multiple Object Spaces") When working with multiple cameras, using multiple object spaces (calibration targets) or a non-planar target can be particularly beneficial. This provides: 1. Better depth variation, which helps reduce projective compensation 2. More opportunities for overlap between cameras with different fields of view or orientations 3. 
Improved extrinsics estimation between cameras For calibrating cameras with minimal overlap in their fields of view, using multiple targets at different positions can help create indirect connections between cameras that don't directly observe the same space. ## Troubleshooting[](#troubleshooting "Direct link to Troubleshooting") If you encounter errors during calibration, please refer to our [Errors](/metrical/commands/command_errors.md) documentation. Common issues with multi-camera calibration include: * No features detected (cal-calibrate-001) * Initial camera pose estimates failed to converge (cal-calibrate-010) * Compatible component type has no detections (cal-calibrate-008) Remember that for successful multi-camera calibration, it's essential that the cameras have overlapping views of the calibration target during at least some portion of the data capture process. --- # Single Camera Calibration MetriCal's camera calibration is a joint process, which includes calibrating intrinsics and extrinsics at the same time. This guide provides tips and best practices specifically for single camera calibration. ## Video Tutorial[](#video-tutorial "Direct link to Video Tutorial") ## MetriCal Workflows[](#metrical-workflows "Direct link to MetriCal Workflows") ### Basic Calibration Command[](#basic-calibration-command "Direct link to Basic Calibration Command") To run a calibration for a single camera, use the following commands:

```
# Initialize the calibration
metrical init -m /camera:eucm $INIT_PLEX

# Run the calibration
metrical calibrate -o $CAMERA_RESULTS $CAMERA_DATA $INIT_PLEX $OBJ
```

Where: * `/camera` is the topic name for your camera * `eucm` is the camera model (other options include `pinhole`, `kb4`, etc.) * `$INIT_PLEX` is the path to the initialization file * `$CAMERA_RESULTS` is the output path for the calibration results * `$CAMERA_DATA` is the path to your captured data * `$OBJ` is the path to your object space definition ### Seeding Larger Calibrations[](#seeding-larger-calibrations "Direct link to Seeding Larger Calibrations") Single camera intrinsics calibration is often the first step in a multi-stage calibration process. When dealing with complex rigs where cameras are mounted in hard-to-reach positions (e.g., several feet off the ground), it can be beneficial to: 1. Calibrate each camera's intrinsics individually using optimal data for that camera 2. Use these individual calibrations to seed a larger multi-camera or sensor fusion calibration This approach allows you to get up close to each camera individually to fully cover its field of view, which might not be possible when capturing data for the entire rig simultaneously. The resulting calibration file from this process can be used as input to subsequent calibrations using the `-p` flag:

```
# Save the single camera calibration results
metrical calibrate -o $CAMERA_RESULTS $CAMERA_DATA $INIT_PLEX $OBJ

# Use these results in a subsequent calibration
metrical init -p $CAMERA_RESULTS -m /camera:eucm $NEW_INIT_PLEX
```

See the [Multi-Camera Calibration guide](/metrical/calibration_guides/multi_camera_cal.md) for details on how to combine multiple single-camera calibrations. ## Practical Example[](#practical-example "Direct link to Practical Example") We've captured an example of a good single camera calibration dataset that you can use to test out MetriCal.
If it's your first time performing a single camera calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like. ### Running the Example Dataset[](#running-the-example-dataset "Direct link to Running the Example Dataset") First, [download the dataset here](https://drive.google.com/file/d/1NmWp4kXj8Ch6zd82AFmC8A90UEDkrl_I/view?usp=drive_link) and unzip it somewhere on your computer. Then, copy the following bash script into the dataset directory: warning This script assumes that you have either installed the apt version of MetriCal, or that you are using the docker version with an alias set to `metrical`. For more information, please review the MetriCal [installation instructions.](/metrical/configuration/installation.md)

```
#!/bin/bash -i

DATA=observations
INIT_PLEX=init_plex.json
OBJ=obj.json
OUTPUT=results.json
REPORT=results.html

metrical init \
  -y \
  -m color:opencv_radtan \
  $DATA $INIT_PLEX

metrical calibrate \
  --render \
  --disable-motion-filter \
  --report-path $REPORT \
  -o $OUTPUT $DATA $INIT_PLEX $OBJ
```

Before running the script, let's take note of a couple of things: * The `metrical init` command uses the `-m` flag to describe the system being calibrated. It indicates that the `color` topic should be calibrated with the `opencv_radtan` model. Note that this example is the same one used by our [multi-camera example](/metrical/calibration_guides/multi_camera_cal.md), so there are additional images in the dataset that will be ignored. The configuration generated by `metrical init` is saved to a file named `plex.json`, which will be used during the calibration step to configure the system. You can learn more about plexes [here.](/metrical/core_concepts/plex_overview.md) * `metrical calibrate` is being passed the `--disable-motion-filter` flag because our example dataset is a sequence of still frames rather than a full capture. If we were to remove this flag, all of our data would be filtered out because MetriCal would see very large jumps in position between frames. If you are calibrating a full `.mcap`, we generally recommend removing this flag and letting the motion filter run. You can read more about motion filtering [here.](/metrical/commands/calibrate.md#motion-filtering) Finally, note the `--render` flag being passed to `metrical calibrate`. This flag will allow us to watch the detection phase of the calibration as it happens in realtime. This can have a large impact on performance, but is invaluable for debugging data quality issues. Rendering has been enabled here so that you can watch the dataset as it's being processed. warning MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the `--render` flag. Please ensure that you've followed the [visualization configuration instructions](/metrical/configuration/visualization.md) before running this script. You should now be ready to run the script. When you start it, it will display a visualization window like the following with detections overlaid on the camera frames: ![A screenshot of the MetriCal detection visualization](/assets/images/single_cam_visualization-c2c7649ec7c80af19c9c22c9400bc153.png) While the calibration is running, take specific note of the target motion patterns, presence of still periods, and breadth of camera coverage. When it comes time to design a motion sequence for your own systems, try to apply any learnings you take from watching this capture.
When the script finishes, you'll be left with three artifacts: * `plex.json` - as described in the prior section. * `report.html` - a human-readable summary of the calibration run. Everything in the report is also logged to your console in realtime during the calibration. You can learn more about interpreting the report [here.](/metrical/results/report.md) * `results.json` - a file containing the final calibration and various other metrics. You can learn more about results json files [here](/metrical/results/output_file.md) and about manipulating your results using `shape` commands [here](/metrical/commands/shape/shape_overview.md). And that's it! Hopefully this trial run will have given you a better understanding of how to capture your own single camera calibration. ## Data Capture Guidelines[](#data-capture-guidelines "Direct link to Data Capture Guidelines") ### Best Practices[](#best-practices "Direct link to Best Practices") | DO | DON'T | | --- | --- | | ✅ Keep targets in focus - use a lens focused at infinity when possible. | ❌ Capture blurry or out-of-focus images. | | ✅ Capture the target across the entire field of view, especially near the edges. | ❌ Only place the target in a small part of the field of view. | | ✅ Rotate the target 90° for some captures (or rotate the camera if target is fixed). | ❌ Keep the target in only one orientation. | | ✅ Capture the target from various angles to maximize convergence (aim for 70° or greater). | ❌ Only capture the target from similar angles. | | ✅ Make the target appear as large as possible in the image. | ❌ Keep the target too far away from the camera. | | ✅ Pause between poses to avoid motion blur. | ❌ Move the target or camera continuously during capture. | ### Keep Targets in Focus[](#keep-targets-in-focus "Direct link to Keep Targets in Focus") This tip is an absolute must when capturing data with cameras. Data that is captured out-of-focus breaks underlying assumptions that are made about the relationship between the image projection and object space; this, in turn, breaks the projection model entirely! A lens focused at infinity captures objects far away without defocusing, so this is the recommended setting for calibration. Care should be taken, therefore, not to get a target so *close* to the lens that it blurs in the image. This near-to-far range in which objects are still focused is called a camera's [*depth of field*](//en.wikipedia.org/wiki/Depth_of_field). Knowing your depth of field can ensure you never get a blurry image in your data. It should be noted that a lens with a shorter focal length, i.e. a wide field of view, tends to stay in focus over larger depths of field. ### Consider Your Target Orientations[](#consider-your-target-orientations "Direct link to Consider Your Target Orientations") One of the largest sources of projective compensation comes from x and y correlations in the image space observations. All of these effects are especially noticeable when there is little to no depth variation in the target field.
It is almost always helpful to collect data where the target field is captured at both 0° and 90° orientations: ![Rotating a target by 90°](/assets/images/rotate_target-fd1bf73ca42b0dfe9e355481a276c251.png) When the object space is captured at 0°: * x image measurements directly measure X in object space * y image measurements directly measure Y in object space When the object space is captured at 90°: * x image measurements directly measure Y in object space * y image measurements directly measure X in object space This process de-correlates errors in x and y, because small errors in the x and y image measurements are statistically independent. There is no need to collect more data beyond 0° and 90° rotations; the two orientations alone are enough. Helping You Help Yourself Note that the same trick can be applied if the *target* is static, and the camera is rotated relative to the targets instead. ### Consider the Full Field Of View[](#consider-the-full-field-of-view "Direct link to Consider the Full Field Of View") **The bigger the target is in the image, the better**. A common data capture mistake is to only place the target in a small part of the field of view of the camera. This makes it extremely difficult to model radial distortions, especially if the majority of the data is in the center of the extent of the image. To mitigate this, object points should be observed across as much of the camera's field of view as is possible. ![Good capture vs. Bad capture](/assets/images/good_capture-458fcbf2012259c71b5ab74656521943.png) It is especially important to get image observations with targets near the periphery of the image, because this is where distortion is the greatest, and where it needs to be characterized the best. As a good rule-of-thumb, a ratio of about 1:1 is good for the target width (Xc) compared to the distance to the target (Zc). For a standard 10×7 checkerboard, this would mean: * 10 squares in the X-direction, each of length 0.10m * 7 squares in the Y-direction, each of length 0.10m * Held 1m away from the camera during data collection This gives a ratio of 1:1 in X, and 7:10 in Y. ![Good target ratio](/assets/images/target_ratio-6327e69a45dd30f0c7322af0f3953a9c.png) In general, increasing the size of the target field is preferred to moving too close to the camera (see "Keep Targets In Focus" above). However, both can be useful in practice. ![Adjusting your target to get good coverage](/assets/images/target_size-fefaf301152a0d94acb9ff325274a25d.png) ### Maximize the Convergence Angle[](#maximize-the-convergence-angle "Direct link to Maximize the Convergence Angle") The convergence angle of the camera's pose relative to the object space is a major factor in the determination of our focal length f, among other parameters. The more the angle changes in a single dataset, the better; in most scenarios, reaching a convergence angle of 70° or greater is recommended. ![top-down angles](/assets/images/top_down_angle-1ba9559f10bf6e5bfa75573839e396c8.png) ![side angles](/assets/images/side_angle-421cc2575a9bf5cb9f07a2ae363fc663.png) It is worth noting that other data qualities shouldn't be sacrificed for better angles. It is still important for the image to be in focus and for the targets to be observable. ## Advanced Considerations[](#advanced-considerations "Direct link to Advanced Considerations") ### The Importance of Depth Variation[](#the-importance-of-depth-variation "Direct link to The Importance of Depth Variation") note This point is more of a consideration than a requirement.
At the very least, it should serve to provide you with more intuition about the calibration data capture process. Using a single, flat planar target provides very little variation in object space depth Z. Restricting object space to a single plane introduces projective compensation in all sorts of ways: * f to all object space Z coordinates * p1​ and p2​ to both f and extrinsic rotations about X and Y (for Brown-Conrady) * f to k1​ through k4​ (for Kannala-Brandt) A **non-planar target**, or a combination of targets using multiple object spaces helps to mitigate this effect by adding *depth variation* in Z. In general, more depth variation is better. For those with only a single, planar calibration targets — know that MetriCal can *still* give great calibration results given the other data capture guidelines are followed. ## Troubleshooting[​](#troubleshooting "Direct link to Troubleshooting") If you encounter errors during calibration, please refer to our [Errors](/metrical/commands/command_errors.md) documentation. Common issues with single camera calibration include: * No features detected (cal-calibrate-001) * Initial camera pose estimates failed to converge (cal-calibrate-010) --- # Camera Models Below are all supported camera intrinsics models in MetriCal. If there is a model that you use that is not listed here, just [contact us](/metrical/support_and_admin/contact.md)! We're always looking to expand our support. ## Common Variables and Definitions[​](#common-variables-and-definitions "Direct link to Common Variables and Definitions") | Variables | Description | | ------------------------- | ------------------------------------------------------------------------------ | | xc​, yc​ | Pixel coordinates in the image plane, with origin at the principal point | | X, Y, Z | Feature coordinates in the world, in 3D Euclidean Space | | X^, Y^, Z^ | Corrected camera ray, in homogeneous coordinates centered on the camera origin | | X^dist​, Y^dist​, Z^dist​ | Distorted camera ray, in homogeneous coordinates centered on the camera origin | **Modeling** (3D → 2D) refers to projecting a 3D point in the world to a 2D point in the image, given the intrinsics provided. In other words, it *models* the effect of distortion on a 3D point. This is also known as "forward projection". **Correcting** (2D → 3D) refers to the process of finding the camera ray that is created when intrinsics are applied to a 2D point in the image. When "undistorting" a pixel, this can be thought of as finding the corrected ray's point of intersection with the image plane. In other words, it *corrects* for the effect of distortion. This is also known as "inverse projection". **Unified** refers to a model that can be used to both model and correct for distortion. ## Camera Model Descriptions[​](#camera-model-descriptions "Direct link to Camera Model Descriptions") * No Distortion * OpenCV RadTan * OpenCV Fisheye * OpenCV Rational * Pinhole with Brown-Conrady * Pinhole with Kannala-Brandt * EUCM * Double Sphere * Omnidirectional * Power Law ### No Distortion[​](#no-distortion "Direct link to No Distortion") MetriCal keyword: `no_distortion` This model is a classic pinhole projection with no distortion or affine effects. This model is most applicable when you're already correcting your images with a rectification process, or when you're using a camera with a very low distortion profile. 
| Parameter | Description | | --------- | ------------------------- | | f | focal length (px) | | cx​ | principal point in x (px) | | cy​ | principal point in y (px) | De facto, nearly any camera that is already corrected for distortion uses this model. All models on this page are either pinhole models by design, or degrade to a pinhole model when no distortion is present. xc​yc​​=fZX​+cx​=fZY​+cy​​​ ### OpenCV RadTan[​](#opencv-radtan "Direct link to OpenCV RadTan") Original Reference OpenCV.org. Camera Calibration and 3D Reconstruction documentation. OpenCV 4.10-dev. MetriCal keyword: `opencv_radtan` Type: **Modeling** This is based on OpenCV's default distortion model, which is a modified Brown-Conrady model. If you've ever used OpenCV, you've most certainly used this. | Parameter | Description | | --------- | --------------------------------- | | f | focal length (px) | | cx​ | principal point in x (px) | | cy​ | principal point in y (px) | | k1​ | first radial distortion term | | k2​ | second radial distortion term | | k3​ | third radial distortion term | | p1​ | first tangential distortion term | | p2​ | second tangential distortion term | ### Common Use Cases[​](#common-use-cases "Direct link to Common Use Cases") OpenCV's adoption of this model has made it the de facto starting point for most calibration tasks. However, this does *not* mean it's multi-purpose. OpenCV RadTan is best suited for cameras with a field of view of 90° or less. A good number of sensor packages use this model, including: | Model | Cameras With Model | | --------------------- | ------------------ | | Intel RealSense D435 | All cameras | | Intel RealSense D435i | All cameras | | Intel RealSense D455 | All cameras | ### Modeling[​](#modeling "Direct link to Modeling") X′Y′r2r′Xdist​Ydist​xc​yc​​=X/Z=Y/Z=X′2+Y′2=(1+k1​r2+k2​r4+k3​r6)=r′X′+2p1​X′Y′+p2​(r2+2X′2)=r′Y′+2p2​X′Y′+p1​(r2+2Y′2)=fXdist​+cx​=fYdist​+cy​​​ ### Correcting[​](#correcting "Direct link to Correcting") Correcting for OpenCV RadTan is a non-linear process. The most common method is to run a non-linear optimization to find the corrected point. This is the method used in MetriCal. ### OpenCV Fisheye[​](#opencv-fisheye "Direct link to OpenCV Fisheye") Original Reference OpenCV.org. Fisheye camera model documentation. OpenCV 4.10-dev. MetriCal keyword: `opencv_fisheye` Type: **Modeling** This model is based on OpenCV's Fisheye lens model, which is a modified Kannala-Brandt model. It has no tangential distortion terms, but is robust to wide-angle lens distortion. | Parameter | Description | | --------- | ----------------------------- | | f | focal length (px) | | cx​ | principal point in x (px) | | cy​ | principal point in y (px) | | k1​ | first radial distortion term | | k2​ | second radial distortion term | | k3​ | third radial distortion term | | k4​ | fourth radial distortion term | ### Common Use Cases[​](#common-use-cases-1 "Direct link to Common Use Cases") Any camera with a fisheye lens below 140° diagonal field of view will probably benefit from this model. ### Modeling[​](#modeling-1 "Direct link to Modeling") X′Y′r2θθd​Xdist​Ydist​xc​yc​​=X/Z=Y/Z=X′2+Y′2=atan(r)=θ(1+k1​θ2+k2​θ4+k3​θ6+k4​θ8)=(rθd​​)X′=(rθd​​)Y′=fXdist​+cx​=fYdist​+cy​​​ ### Correcting[​](#correcting-1 "Direct link to Correcting") Correcting for OpenCV Fisheye is a non-linear process. The most common method is to run a non-linear optimization to find the corrected point. This is the method used in MetriCal. 
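For readers who prefer conventional notation, the two forward projections above can be restated as follows. This is only a transcription of the Modeling sections for OpenCV RadTan and OpenCV Fisheye, with X' = X/Z, Y' = Y/Z, and r² = X'² + Y'², and all other symbols as defined in the parameter tables above:

```latex
% OpenCV RadTan forward projection (restated from the Modeling section above)
\begin{aligned}
r' &= 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \\
X_{\mathrm{dist}} &= r' X' + 2 p_1 X' Y' + p_2 (r^2 + 2 X'^2) \\
Y_{\mathrm{dist}} &= r' Y' + 2 p_2 X' Y' + p_1 (r^2 + 2 Y'^2) \\
x_c &= f X_{\mathrm{dist}} + c_x, \qquad y_c = f Y_{\mathrm{dist}} + c_y
\end{aligned}

% OpenCV Fisheye forward projection (restated from the Modeling section above)
\begin{aligned}
\theta &= \arctan(r), \qquad
\theta_d = \theta \left(1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8\right) \\
X_{\mathrm{dist}} &= \frac{\theta_d}{r} X', \qquad
Y_{\mathrm{dist}} = \frac{\theta_d}{r} Y' \\
x_c &= f X_{\mathrm{dist}} + c_x, \qquad y_c = f Y_{\mathrm{dist}} + c_y
\end{aligned}
```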
### OpenCV Rational[​](#opencv-rational "Direct link to OpenCV Rational") Original Reference OpenCV.org. Camera Calibration and 3D Reconstruction documentation. OpenCV 4.10-dev. MetriCal keyword: `opencv_rational` Type: **Modeling** OpenCV Rational is the full distortion model used by OpenCV. It is an extension of the RadTan model, in which the radial distortion is modeled as a rational function. | Parameter | Description | | --------- | --------------------------------- | | f | focal length (px) | | cx​ | principal point in x (px) | | cy​ | principal point in y (px) | | k1​ | first radial distortion term | | k2​ | second radial distortion term | | k3​ | third radial distortion term | | p1​ | first tangential distortion term | | p2​ | second tangential distortion term | | k4​ | fourth radial distortion term | | k5​ | fifth radial distortion term | | k6​ | sixth radial distortion term | ### Common Use Cases[​](#common-use-cases-2 "Direct link to Common Use Cases") This model is mostly equivalent to OpenCV RadTan. Given the number of parameters, this model can overfit on the available data, making it tricky to generalize. However, if you know your camera benefits from this specification, it does offer additional flexibility. As this is a modified version of OpenCV RadTan, we recommend its use for lenses with a field of view of 90° or less. ### Modeling[​](#modeling-2 "Direct link to Modeling") X′Y′r2r′Xdist​Ydist​xc​yc​​=X/Z=Y/Z=X′2+Y′2=1+k4​r2+k5​r4+k6​r61+k1​r2+k2​r4+k3​r6​=r′X′+2p1​X′Y′+p2​(r2+2X′2)=r′Y′+2p2​X′Y′+p1​(r2+2Y′2)=fXdist​+cx​=fYdist​+cy​​​ ### Correcting[​](#correcting-2 "Direct link to Correcting") Correcting for OpenCV Rational is a non-linear process. The most common method is to run a non-linear optimization to find the corrected point. This is the method used in MetriCal. ### Pinhole with (Inverse) Brown-Conrady[​](#pinhole-with-inverse-brown-conrady "Direct link to Pinhole with (Inverse) Brown-Conrady") Original Publication A. E. Conrady, Decentred Lens-Systems, Monthly Notices of the Royal Astronomical Society, Volume 79, Issue 5, March 1919, Pages 384–390, MetriCal keyword: `pinhole_with_brown_conrady` Type: **Correcting** This model is the first of our *inverse* models. These models *correct* for distortion terms in the image space, rather than *modeling* the effects of distortion in the world space. If you're just looking to correct for distortion, this could be the model for you! Notice that the model parameters are identical to those found in OpenCV RadTan. This is because both models are based off of the Brown-Conrady approach to camera modeling. Don't mix up one for the other! | Parameter | Description | | --------- | --------------------------------- | | f | focal length (px) | | cx​ | principal point in x (px) | | cy​ | principal point in y (px) | | k1​ | first radial distortion term | | k2​ | second radial distortion term | | k3​ | third radial distortion term | | p1​ | first tangential distortion term | | p2​ | second tangential distortion term | ### Common Use Cases[​](#common-use-cases-3 "Direct link to Common Use Cases") If you're just correcting for distortion, rather than modeling it, this model is a good choice. ### Modeling[​](#modeling-3 "Direct link to Modeling") Modeling distortion for Inverse Brown-Conrady is a non-linear process. The most common method is to run a non-linear optimization to find the distorted point. This is the method used in MetriCal. 
### Correcting[​](#correcting-3 "Direct link to Correcting") r2r′xcorr​ycorr​X^Y^Z^​=xc2​+yc2​=(k1​r2+k2​r4+k3​r6)=r′xc​+p1​(r2+2xc2​)⋅2p2​xc​yc​=r′yc​+p2​(r2+2yc2​)+2p1​xc​yc​=xc​−xcorr​=yc​−ycorr​=f​​ ### Pinhole with (Inverse) Kannala-Brandt[​](#pinhole-with-inverse-kannala-brandt "Direct link to Pinhole with (Inverse) Kannala-Brandt") Original Publication J. Kannala and S. S. Brandt, "A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 8, pp. 1335-1340, Aug. 2006, doi: 10.1109/TPAMI.2006.153.  MetriCal keyword: `pinhole_with_kannala_brandt` Type: **Correcting** Inverse Kannala-Brandt follows the same paradigm as our other Inverse models: it *corrects* distortion in image space, rather than *modeling* it in world space. Inverse Kannala-Brandt is close to the original Kannala-Brandt model, and therefore shares the same set of distortion parameters as OpenCV Fisheye. Don't get them mixed up! | Parameter | Description | | --------- | ----------------------------- | | f | focal length (px) | | cx​ | principal point in x (px) | | cy​ | principal point in y (px) | | k1​ | first radial distortion term | | k2​ | second radial distortion term | | k3​ | third radial distortion term | | k4​ | fourth radial distortion term | ### Common Use Cases[​](#common-use-cases-4 "Direct link to Common Use Cases") If you're just correcting for distortion, rather than modeling it, this model is a good choice for lenses with a field of view of 140° or less. ### Modeling[​](#modeling-4 "Direct link to Modeling") Modeling distortion for Inverse Kannala-Brandt is a non-linear process. The most common method is to run a non-linear optimization to find the distorted point. This is the method used in MetriCal. ### Correction[​](#correction "Direct link to Correction") r2θr′xcorr​ycorr​X^Y^Z^​=xc2​+yc2​=atan(fr​)=θ(1+k1​θ2+k2​θ4+k3​θ6+k4​θ8)=r′(rxc​​)=r′(ryc​​)=xc​−xcorr​=yc​−ycorr​=f​​ ### EUCM[​](#eucm "Direct link to EUCM") Original Publication Khomutenko, B., Garcia, G., & Martinet, P. (2016). An Enhanced Unified Camera Model. IEEE Robotics and Automation Letters, 1(1), 137–144. doi:10.1109/lra.2015.2502921. MetriCal keyword: `eucm` Type: **Unified** EUCM stands for *Enhanced Unified Camera Model* (aka *Extended Unified Camera Model*), and is a riff on the *Unified Camera Model*. This model is "unified" because it offers direct calculation of both modeling and correcting for distortion. At lower distortion levels, this model naturally degrades into a pinhole model. This model is also unique in that it primarily operates on camera rays, rather than requiring a certain focal length or pixel distance to mathematically operate. The Modeling and Correction operations below will convert camera rays to — and from — a distorted state. Users who wish to operate in the image plane should convert these homogeneous coordinates to a Z^ matching the focal length. | Parameter | Description | | --------- | ---------------------------------------------------------------------- | | f | focal length (px) | | cx​ | principal point in x (px) | | cy​ | principal point in y (px) | | α | The distortion parameter that relates ellipsoid and pinhole projection | | β | The distortion parameter that controls the ellipsoid shape | ### Common Use Cases[​](#common-use-cases-5 "Direct link to Common Use Cases") EUCM is (at time of writing) becoming a popular choice for cameras with strong radial distortion. 
Its ability to model distortion in a way that is both accurate and efficient makes it a good choice for many applications. It is capable of handling distortions for lenses with a field of view greater than 180°. ### Modeling[​](#modeling-5 "Direct link to Modeling") dγsX^dist​Y^dist​Z^dist​xc​yc​​=(β(X^2+Y^2)+Z^2)​=1−α=αd+γZ^1​=X^s=Y^s=1=fX^dist​+cx​=fY^dist​+cy​​​ ### Correction[​](#correction-1 "Direct link to Correction") (mx​,my​)r2γmz​sX^Y^Z^​=(Z^dist​X^dist​​,Z^dist​Y^dist​​)=mx2​+my2​=1−α=α1−(αγ)βr2​+γ1−βα2r2​=r2+mz2​​1​=mx​s=my​s=mz​s​​ ### Double Sphere[​](#double-sphere "Direct link to Double Sphere") Original Publication Usenko, V., Demmel, N., & Cremers, D. (2018). The Double Sphere Camera Model. 2018 International Conference on 3D Vision (3DV). doi:10.1109/3dv.2018.00069  MetriCal keyword: `double_sphere` Type: **Unified** Double Sphere is the newest model on this page. Like EUCM, it also offers direct computation of both modeling and correction on camera rays. However, it uses two spheres to model the effects of even strong radial distortion. At lower distortion levels, this model naturally degrades into a pinhole model. | Parameter | Description | | --------- | -------------------------------------------------------------------------- | | f | focal length (px) | | cx​ | principal point in x (px) | | cy​ | principal point in y (px) | | ξ | The distortion parameter corresponding to distance between spheres | | α | The distortion parameter that relates second sphere and pinhole projection | ### Common Use Cases[​](#common-use-cases-6 "Direct link to Common Use Cases") Its use of projection through its "double spheres" makes it an ideal model for ultra-wide field of view lenses. The lenses tested in its original publication had fields of view ranging from 122° to 195°! ### Modeling[​](#modeling-6 "Direct link to Modeling") d1​Zmod​d2​sX^dist​Y^dist​Z^dist​xc​yc​​=X^2+Y^2+Z^2​=ξd1​+Z^=X^2+Y^2+Zmod2​​=αd2​+(1−α)Zmod​1​=X^s=Y^s=1=fX^dist​+cx​=fY^dist​+cy​​​ ### Correcting[​](#correcting-4 "Direct link to Correcting") (mx​,my​)r2mz​sX^Y^Z^​=(Z^dist​X^dist​​,Z^dist​Y^dist​​)=mx2​+my2​=α1−(2α−1)r2​+(1−α)(1−α2r2)​=mz2​+r2mz​ξ+mz2​+(1−ξ2)r2​​=mx​s=my​s=mz​s−ξ​​ ### Omnidirectional (Omni)[​](#omnidirectional-omni "Direct link to Omnidirectional (Omni)") Original Publication Mei, C. and Rives, P. Single View Point Omnidirectional Camera Calibration from Planar Grids. Proceedings 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 2007, pp. 3945-3950, doi: 10.1109/ROBOT.2007.364084. MetriCal keyword: `omni` Type: **Modeling** The Omnidirectional camera model is designed to handle the unique distortion profile of a *catadioptric* camera system, or one that combines mirrors and lenses together. These systems are often used for camera systems that seek to capture a full 360° field of view (or close to it). In "[A Unifying Theory for Central Panoramic Systems and Practical Implications](https://link.springer.com/content/pdf/10.1007/3-540-45053-X_29.pdf)" (Geyer, Daniilidis), the authors show that all mirror surfaces can be modeled with a projection from an imaginary unit sphere onto a plane perpendicular to the sphere center and the conic created by the mirror. The Omnidirectional camera model codifies this relationship mathematically. Yes, it's a little confusing We won't go into it here, but the papers linked above explain it well. The Omnidirectional camera model also implements the same radial and tangential distortion terms as [OpenCV RadTan](#opencv-radtan). 
However, while OpenCV RadTan uses 3 radial distortion terms, this only uses 2. The reason for this? Everyone else did it (even the authors' original implementation), so now it's convention. | Parameter | Description | | --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | f | generalized focal length (px). This term is not a "true" focal length, but rather the camera focal length scaled by a collinear factor η that represents the effect of the mirror | | cx​ | principal point in x (px) | | cy​ | principal point in y (px) | | ξ | the distance between the unit sphere and the "projection" sphere upon which the focal plane is projected | | k1​ | first radial distortion term | | k2​ | second radial distortion term | | p1​ | first tangential distortion term | | p2​ | second tangential distortion term | ### Common Use Cases[​](#common-use-cases-7 "Direct link to Common Use Cases") The omnidirectional camera model is best used for extreme fields of view. Anything greater than 140° would be well-served by this model. ### Modeling[​](#modeling-7 "Direct link to Modeling") XXS​XC​X′Y′r2r′Xdist​Ydist​xc​yc​​=\[X,Y,Z]=∥X∥X​=\[XS​,YS​,ZS​+ξ]=(ZC​XC​​)=(ZC​YC​​)=X′2+Y′2=1+k1​r2+k2​r4=r′X′+2p1​X′Y′+p2​(r2+2X′2)=r′Y′+2p2​X′Y′+p1​(r2+2Y′2)=fXdist​+cx​=fYdist​+cy​​​ ### Correcting[​](#correcting-5 "Direct link to Correcting") Though the Omnidirectional model technically has a unified inversion, the introduction of the radial and tangential distortion means that correcting is a non-linear process. The most common method is to run a non-linear optimization to find the corrected point. This is the method used in MetriCal. ### Power Law[​](#power-law "Direct link to Power Law") Original Reference This is an internal model developed by Tangram Vision for modeling distortion with a simple power law. MetriCal keyword: `power_law` Type: **Modeling** The Power Law camera model describes distortion as a power law, with distortion increasing exponentially according to the `alpha` term. The `beta` term is a linear parameter on the radial distance. This model is designed to be simple yet effective for many camera types, particularly for lenses with moderate to strong radial distortion. The full form of the model is: ru′v′​=u2+v2​=u−β⋅rα⋅ru​=v−β⋅rα⋅rv​​​ When simplified, the distortion profile itself can be expressed as: dist\_radial​=1−β⋅rα−1​​ | Parameter | Description | | --------- | ------------------------------------------------- | | f | focal length (px) | | cx​ | principal point in x (px) | | cy​ | principal point in y (px) | | α | exponential distortion parameter (must be >= 1.0) | | β | linear distortion parameter | ### Common Use Cases[​](#common-use-cases-8 "Direct link to Common Use Cases") The Power Law model provides a nice balance between simplicity and expressiveness. It can handle a wide range of camera distortion profiles with just two parameters. 
It works well for: * Cameras with moderate to strong radial distortion * Systems where computational efficiency is important * Cases where you need a simple model that can still handle non-linear distortion effects ### Modeling[​](#modeling-8 "Direct link to Modeling") X′Y′rdist\_radialXdist​Ydist​xc​yc​​=X/Z=Y/Z=X′2+Y′2​=1−β⋅rα−1=X′⋅dist\_radial=Y′⋅dist\_radial=f⋅Xdist​+cx​=f⋅Ydist​+cy​​​ ### Correcting[​](#correcting-6 "Direct link to Correcting") Correcting for the Power Law model is a non-linear process, requiring iterative optimization. MetriCal uses a Levenberg-Marquardt algorithm to find the corrected point, since this model can be sensitive at certain values of alpha and beta. --- # IMU Models Below are all supported IMU intrinsics models in MetriCal. If there is a model that you use that is not listed here, just [contact us](/metrical/support_and_admin/contact.md)! We're always looking to expand our support. ## Common Variables and Definitions[​](#common-variables-and-definitions "Direct link to Common Variables and Definitions") | Variables | Description | | --------- | ------------------------------------------------------------------------------------------------------- | | ωi | Corrected (Calibrated) angular velocity. | | fi | Corrected (Calibrated) specific force. | | ω^g | Distorted (Uncalibrated) gyroscope measurement. | | f^​a | Distorted (Uncalibrated) accelerometer measurement. Also referred to as the specific force measurement. | **Specific Force** is the mass-specific force experienced by the proof mass in an accelerometer. This is commonly known as the accelerometer measurement. It differs from acceleration in that it has an additive term due to gravity operating on the proof mass in the accelerometer. To convert between acceleration of the accelerometer and specific-force, one can apply the equation fi=ai+gi​​ where gi is the gravity vector expressed in the IMU's frame. **Modeling** (Calibrated → Uncalibrated) refers to transforming an ideal angular velocity, or specific force and modeling the uncalibrated measurement produced by an IMU sensor given the intrinsics provided. In other words, it *models* the effect of the intrinsics on the angular velocity or specific force experienced by the IMU. **Correcting** (Uncalibrated → Calibrated) refers to transforming an uncalibrated gyroscope or accelerometer measurement produced by an IMU sensor and correcting for intrinsics effects using the intrinsics provided. In other words, it *corrects* the measurement produced by an IMU approximating the true angular velocity or specific force experienced by the IMU. Correcting Models All the IMU intrinsic models used in MetriCal are correcting models in that they naturally express the corrected quantities in terms of the measured quantities. However, these models may also be used to model the IMU intrinsics. This simply requires inverting the model by solving for the measured quantities in terms of the corrected quantities. The modeling equations for each of the intrinsic models are given below. ## IMU Frames[​](#imu-frames "Direct link to IMU Frames") A coordinate frame is simply the position and directions defining the basis which can be used to numerically express a quantity. The coordinate frames used for IMU models are given in the following table. 
| Frame | Description | | --- | --- | | a | The accelerometer coordinate frame | | g | The gyroscope coordinate frame | | i | The IMU coordinate frame | In the currently supported IMU models in MetriCal, the a, g, and i coordinate frames are assumed to be at the same position in space. Furthermore, the a and g frames are not assumed to be orthonormal. The i frame, however, will always be a proper orthonormal coordinate frame. Numerical quantities use a superscript to describe their frame of reference. As such, the above frame variables should only be interpreted as a frame if they appear as a superscript. For example, the gravity vector in the IMU's frame is given as gi, but g should not be interpreted as a frame in this context. ## IMU Bias[](#imu-bias "Direct link to IMU Bias") Recommended Reading Woodman, O. (2007). An introduction to inertial navigation. [Read here](https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-696.pdf). Farrell, J., Silva, F., Rahman, F., Wendel, J. (2022). IMU Error Modeling Tutorial. IEEE Control Systems Magazine, Vol 42, No 6, pp. 40-66. [Read here](https://escholarship.org/content/qt1vf7j52p/qt1vf7j52p_noSplash_750bb4e8a68d04f2d577450b1bf56572.pdf). IMUs have a time-varying bias on the measurements they produce. In MetriCal, this is modeled as an additive bias on top of the intrinsically distorted measurements. | Parameter | Mathematical Notation | Description | | --- | --- | --- | | `gyro_bias` | bgg | Additive bias on the gyroscope measurement. | | `specific_force_bias` | bfa | Additive bias on the accelerometer (specific force) measurement. | By necessity, MetriCal infers the additive biases on both the accelerometer and the gyroscope measurements. However, the IMU bias is a time-varying quantity that is highly influenced by electrical noise, temperature fluctuations, and other factors. These biases can change simply by power cycling the IMU sensor. As such, it is recommended that the IMU biases inferred by MetriCal are only used as an initial estimate of bias in applications. ## IMU Noise[](#imu-noise "Direct link to IMU Noise") Noise Specification A full understanding of the IMU noise model is not necessary to use MetriCal. MetriCal chooses default values for the noise model that should work with most MEMS consumer grade IMUs. However, the interested reader is referred to the [IEEE Specification](https://ieeexplore.ieee.org/document/660628) Appendix C, where these values are more rigorously defined. In addition to bias and intrinsics, IMUs experience noise on their measurement outputs. This noise must be modeled to properly infer the bias and intrinsic parameters of an IMU. At a high level, the noise model is given as x(t) = y(t) + n(t) + b(t) + k(t) where * x(t) is the noisy time-varying signal * y(t) is the true time-varying signal * n(t) is the component of the noise with constant power spectral density (white noise) * b(t) is the component of the noise with 1/f power spectral density (pink noise) * k(t) is the component of the noise with 1/f² power spectral density (brown noise) This noise model is applied to each component of the gyroscope and accelerometer output. More details on this noise model can be found in the [IEEE Specification](https://ieeexplore.ieee.org/document/660628).
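Stated in terms of power spectral densities (the S(f) notation below is ours, not MetriCal's, and exact scale factors depend on the convention used), the three noise components behave as:

```latex
% Conventional power spectral density forms of the three noise components.
% In the usual convention, the random walk coefficients below drive the white-noise
% term, the bias instability coefficients the 1/f term, and the rate random walk
% coefficients the 1/f^2 term.
S_n(f) \propto \text{const.} \qquad
S_b(f) \propto \frac{1}{f} \qquad
S_k(f) \propto \frac{1}{f^{2}}
```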
### Noise Parameters[​](#noise-parameters "Direct link to Noise Parameters") The noise parameters for the IMU noise model are given as | Parameter | Units | Description | | ----------------------------------------- | ---------- | -------------------------------------------------------------------- | | `angular_random_walk` | rad/s​ | Angular random walk coefficients | | `gyro_bias_instability` | rad/s | Gyroscope bias instability coefficients | | `gyro_rate_random_walk` | rad/s(3/2) | Gyroscope rate random walk coefficients | | `gyro_correlation_time_constant` | unitless | Gyroscope correlation time constant for bias instability process | | `gyro_turn_on_bias_uncertainty` | rad/s | Prior standard deviation of the gyroscope bias components | | `velocity_random_walk` | m/s(3/2) | Velocity random walk coefficients | | `accelerometer_bias_instability` | m/s2 | Accelerometer bias instability coefficients | | `accelerometer_rate_random_walk` | m/s(5/2) | Accelerometer rate random walk coefficients | | `accelerometer_correlation_time_constant` | unitless | Accelerometer correlation time constant for bias instability process | | `accelerometer_turn_on_bias_uncertainty` | rad/s | Prior standard deviation of the specific force bias components | MetriCal will automatically set these with reasonable values for consumer grade IMUs (like the BMI055 used in Intel's Realsense D435i). However, if you have better values from an IMU datasheet or an Allan Variance analysis, you can modify the defaults to better suit your needs. If you are using values given in a datasheet, be sure to convert to the units used by MetriCal, since many datasheets will give units like mg/s​ (milli-g's) or (∘/hour)/Hz​ ## IMU Model Descriptions[​](#imu-model-descriptions "Direct link to IMU Model Descriptions") * Scale * Scale and Shear * Scale, Shear, and Rotation * Scale, Shear, Rotation, and G-sensitivity ### Scale[​](#scale "Direct link to Scale") MetriCal keyword: `scale` This model corrects any scale factor errors in the accelerometer and gyroscope measurements. | Parameter | Mathematical Notation | Description | | ---------------------- | --------------------- | ------------------------------------------------------------------ | | `gyro_scale` | \[sgx​,sgy​,sgz​] | Scaling applied to each component of the gyroscope measurement | | `specific_force_scale` | \[sfx​,sfy​,sfz​] | Scaling applied to each component of the accelerometer measurement | Where the intrinsic correction matrices are formed from the other variables as Dg​Df​​=​sgx​00​0sgy​0​00sgz​​​=​sfx​00​0sfy​0​00sfz​​​​ #### Correction[​](#correction "Direct link to Correction") ωifi​=Dg​(ω^g−bgg​)=Df​(f^​a−bfa​)​ #### Modeling[​](#modeling "Direct link to Modeling") ω^gf^​a​=Dg−1​ωi+bgg​=Df−1​fi+bfa​​ ### Scale and Shear[​](#scale-and-shear "Direct link to Scale and Shear") MetriCal keyword: `scale_shear` This model corrects any scale factor errors and non-orthogonality errors in the accelerometer or gyroscope measurements. These non-orthogonality errors are often called "shear" errors which gives the scale and shear model its name. 
| Parameter | Mathematical Notation | Description | | ---------------------- | --------------------- | ----------------------------------------------------------------------- | | `gyro_scale` | \[sgx​,sgy​,sgz​] | Scaling applied to each component of the gyroscope measurement | | `specific_force_scale` | \[sfx​,sfy​,sfz​] | Scaling applied to each component of the accelerometer measurement | | `gyro_shear` | \[σgxy​,σgxz​,σgyz​] | Non-orthogonality compensation applied to the gyroscope measurement | | `accelerometer_shear` | \[σfxy​,σfxz​,σfyz​] | Non-orthogonality compensation applied to the accelerometer measurement | Where the intrinsic correction matrices are formed from the other variables as Dg​Df​​=​sgx​00​σgxy​sgy​0​σgxz​σgyz​sgz​​​=​sfx​00​σfxy​sfy​0​σfxz​σfyz​sfz​​​​ #### Correction[​](#correction-1 "Direct link to Correction") ωifi​=Dg​(ω^g−bgg​)=Df​(f^​a−bfa​)​ #### Modeling[​](#modeling-1 "Direct link to Modeling") ω^gf^​a​=Dg−1​ωi+bgg​=Df−1​fi+bfa​​ ### Scale, Shear, and Rotation[​](#scale-shear-and-rotation "Direct link to Scale, Shear, and Rotation") MetriCal keyword: `scale_shear_rotation` This model corrects any scale factor errors and non-orthogonality errors in the accelerometer or gyroscope measurements. Additionally, this model corrects for any rotational misalignment between the gyroscope and the IMU's frame. In this model, the IMU frame is assumed to be the same as the scale and shear corrected accelerometer frame. | Parameter | Mathematical Notation | Description | | ---------------------- | --------------------- | ----------------------------------------------------------------------- | | `gyro_scale` | \[sgx​,sgy​,sgz​] | Scaling applied to each component of the gyroscope measurement | | `specific_force_scale` | \[sfx​,sfy​,sfz​] | Scaling applied to each component of the accelerometer measurement | | `gyro_shear` | \[σgxy​,σgxz​,σgyz​] | Non-orthogonality compensation applied to the gyroscope measurement | | `accelerometer_shear` | \[σfxy​,σfxz​,σfyz​] | Non-orthogonality compensation applied to the accelerometer measurement | | `accel_from_gyro_rot` | Rgi​ | The rotation from the gyroscope frame to the IMU's frame | The intrinsic correction matrices are formed with the two equations Dg​Df​​=​sgx​00​σgxy​sgy​0​σgxz​σgyz​sgz​​​=​sfx​00​σfxy​sfy​0​σfxz​σfyz​sfz​​​​ #### Correction[​](#correction-2 "Direct link to Correction") ωifi​=Rgi​Dg​(ω^g−bgg​)=Df​(f^​a−bfa​)​ #### Modeling[​](#modeling-2 "Direct link to Modeling") ω^gf^​a​=Dg−1​Rig​ωi+bgg​=Df−1​fi+bfa​​ ### Scale, Shear, Rotation, and G-sensitivity[​](#scale-shear-rotation-and-g-sensitivity "Direct link to Scale, Shear, Rotation, and G-sensitivity") MetriCal keyword: `scale_shear_rotation_g_sensitivity` This model corrects any scale factor errors, non-orthogonality errors and rotational misalignment errors in the accelerometer or gyroscope measurements. Additionally, this model corrects for a phenomenon known as *G-sensitivity*. G-sensitivty is a property of an oscillating gyroscope that causes a gyroscope to register a bias on its measurements when the gyroscope experiences a specific-force. 
| Parameter | Mathematical Notation | Description | | -------------------------- | --------------------- | --------------------------------------------------------------------------- | | `gyro_scale` | \[sgx​,sgy​,sgz​] | Scaling applied to each component of the gyroscope measurement | | `specific_force_scale` | \[sfx​,sfy​,sfz​] | Scaling applied to each component of the accelerometer measurement | | `gyro_shear` | \[σgxy​,σgxz​,σgyz​] | Non-orthogonality compensation applied to the gyroscope measurement | | `accelerometer_shear` | \[σfxy​,σfxz​,σfyz​] | Non-orthogonality compensation applied to the accelerometer measurement | | `accel_from_gyro_rot` | Rgi​ | The rotation from the gyroscope frame to the IMU's frame | | `g_sensitivity` | \[γx​,γy​,γz​] | The in-axis G-sensitivity of the gyroscope induced by the specific-force | | `g_sensitivity_cross_axis` | \[γxy​,γxz​,γyz​] | The cross-axis G-sensitivity of the gyroscope induced by the specific-force | The intrinsic correction matrices are formed with the two equations Dg​Df​​=​sgx​00​σgxy​sgy​0​σgxz​σgyz​sgz​​​=​sfx​00​σfxy​sfy​0​σfxz​σfyz​sfz​​​​ and the G-sensitivity matrix is given by T​=​γx​00​γxy​γy​0​γxz​γyz​γz​​​​ #### Correction[​](#correction-3 "Direct link to Correction") ωifi​=Rgi​Dg​(ω^g−Tfi−bgg​)=Df​(f^​a−bfa​)​ Correction Order By adding G-sensivity, the gyroscope correction becomes dependent upon the specific-force correction. As such, it is necessary to first correct the accelerometer measurement and then use the corrected specific-force to correct the gyroscope measurement. #### Modeling[​](#modeling-3 "Direct link to Modeling") ω^gf^​a​=Dg−1​Rig​ωi+Tfi+bgg​=Df−1​fi+bfa​​ --- # LiDAR Models Below are all supported LiDAR intrinsics models in MetriCal. If there is a model that you use that is not listed here, just [contact us](/metrical/support_and_admin/contact.md)! We're always looking to expand our support. ## Lidar's only model: Lidar[​](#lidars-only-model-lidar "Direct link to Lidar's only model: Lidar") MetriCal keyword: `lidar` For all intents and purposes, LiDAR intrinsics are usually reliable from the factory. MetriCal currently only supports extrinsics calibration for lidar, making this our simplest model. It assumes there are no range, azimuth, or altitude offsets for any of the beams. This is the default model for all LiDARs. --- # Releases + Changelogs ## Version 14.0.1 - May 21st, 2025[​](#version-1401---may-21st-2025 "Direct link to Version 14.0.1 - May 21st, 2025") ### Overview[​](#overview "Direct link to Overview") This release fixes a number of bugs that cropped up in 14.0.0. ### Changelog[​](#changelog "Direct link to Changelog") #### Fixed[​](#fixed "Direct link to Fixed") * Fixed a bug in our aprilgrid detector to better support interior detections on Camera ←→ LiDAR boards. * Fixed a bug where extrinsics would not be generated in 1 Camera ←→ 1 LiDAR datasets. * Fixed a bug where extrinsics would not be generated in 1 Camera ←→ 1 IMU datasets. ## Version 14.0.0 - May 12th, 2025[​](#version-1400---may-12th-2025 "Direct link to Version 14.0.0 - May 12th, 2025") ### Overview[​](#overview-1 "Direct link to Overview") This release updates licensing to support different subscription tiers (R\&D, Growth, Enterprise), introduces unlicensed calibration, and includes significant refactoring of internals. The unlicensed calibration mode allows users to test and refine their data capture process without needing to purchase a license. 
Several deprecated features have been removed, and various error code mappings have been updated for better clarity and consistency. ### Changelog[​](#changelog-1 "Direct link to Changelog") #### Added[​](#added "Direct link to Added") * Support for Tangram's new subscription tiers and limits. * Unlicensed calibration mode, allowing users to test and refine their data capture process without purchasing a license. This mode outputs all metrics and diagnostics as usual but hides the final calibration results. * Association metadata is now written to the `results.json` file and required by default on deserialization (breaking change for previous results files). #### Changed[​](#changed "Direct link to Changed") * The `no_offset` model variant in the [Init](/metrical/commands/init.md) command has been renamed to `lidar`. * Mutual observation count between stereo pairs (and the corresponding data diagnostic) now considers all unique observations within a similar sync group, rather than explicit pair-wise overlap of individual targets in the greater object-space. * Consolidated exit code handling for better error reporting. #### Removed[​](#removed "Direct link to Removed") * The IMU `no_intrinsics` model has been removed from [Init](/metrical/commands/init.md) and is no longer supported by MetriCal. * Exit code 11 has been deprecated; MetriCal will now correctly report exit code 1 on IO related errors when using the [Consolidate Object Space](/metrical/commands/consolidate_object_spaces.md) command. * Various deprecated features and arguments including evaluate mode, `--topic-to-component`, and preset devices have been removed. #### Deprecated[​](#deprecated "Direct link to Deprecated") Version 1 license keys (prefixed with `key/`) have been **deprecated and will no longer work after November 1st, 2025**. If you use license keys prefixed with `key/`, please create new license keys (which will be version 2 keys, prefixed with `key2/`) and use them instead. ### Technical Notes[​](#technical-notes "Direct link to Technical Notes") This version makes several breaking changes to the internal architecture of MetriCal, particularly around the handling of licensing and model variants. The new licensing tier system allows for more flexible deployment options tailored to different user needs. The removal of the `evaluate` command represents a significant change to the workflow, but aligns with our focus on providing more accurate and reliable calibration measurements. Users who previously relied on this command should contact Tangram support for guidance on alternative approaches. The mutual observation count improvement will lead to more accurate stereo pair diagnostics, especially in complex multi-camera systems where object-space observations may not perfectly overlap between cameras but still occur within similar sync groups. Note that this release introduces a breaking change for results files generated by previous versions, as the new association metadata is now required during deserialization. ## Version 13.2.1 - April 9th, 2025[​](#version-1321---april-9th-2025 "Direct link to Version 13.2.1 - April 9th, 2025") ### Fixed[​](#fixed-1 "Direct link to Fixed") * Images measurements that don't have enough data to derive a pose from are now filtered out of the optimization, rather than being assigned a "default" pose. 
## Version 13.2.0 - April 3rd, 2025[](#version-1320---april-3rd-2025 "Direct link to Version 13.2.0 - April 3rd, 2025") ### Overview[](#overview-2 "Direct link to Overview") This release introduces the Power Law camera model, improves camera initialization, and adds new diagnostics to help users identify and fix issues with their calibration data. The Power Law model is particularly effective for cameras with significant distortion, and the new data diagnostics will guide users through common problems during calibration data collection. ### Changelog[](#changelog-2 "Direct link to Changelog") #### Added[](#added-1 "Direct link to Added") * New [Power Law](/metrical/calibration_models/cameras.md) camera model, effective for cameras with significant distortion. * Comprehensive data diagnostics to help identify issues in calibration data collection. * New chart: "Observed Camera Range of Motion" to help visualize and diagnose potential projective compensation effects. * Ability to consolidate multiple object spaces using object relative extrinsics with the new `consolidate-object-spaces` mode. #### Changed[](#changed-1 "Direct link to Changed") * Enhanced camera initialization routine for better handling of heavily distorted images. * Processed Observation Count table now distinguishes between filtering from quality vs. motion. * Console output reorganized into a more readable report format. * Improved error messages for data integrity issues. * All charts now have clear titles for easier reference. * Extrinsics tables are now shown in alphabetical order by component name. #### Fixed[](#fixed-2 "Direct link to Fixed") * Fixed the rectification tables in console output. * Tuned correction function for OpenCV Fisheye model. * Modified Double Sphere model jacobians for better accuracy. * Fixed timestamp casting error in timeline chart. #### Removed[](#removed-1 "Direct link to Removed") * Evaluate mode is no more. A better implementation is planned for a future release of MetriCal. ### Technical Notes[](#technical-notes-1 "Direct link to Technical Notes") This version provides a significant enhancement to diagnostic capabilities, with particular focus on helping users understand why a calibration might be failing. The new data diagnostics system checks for common issues such as: * Insufficient camera movement range * Poor feature coverage across camera FOV * Too many observations filtered by quality or motion * Missing component dependencies required for calibration * Insufficient mutual observations between camera pairs Each diagnostic comes with a detailed explanation and suggestions for improvement, referencing specific charts in the report for additional context. ## Version 13.1.0 - February 25th, 2025[](#version-1310---february-25th-2025 "Direct link to Version 13.1.0 - February 25th, 2025") ### Overview[](#overview-3 "Direct link to Overview") This update changes the way MetriCal reports errors and fixes various bugs with init mode. In addition, we have made the decision to deprecate compatibility with both folder and ROS 1 bag datasets. This decision was made because we want to standardize our data ingestion code around MCAP and remove the need for MetriCal to handle idiosyncrasies with the other two input formats. These data formats are supported in 13.1.0, but will be removed in a future release. For ROS1 bags, it is very simple to use the [mcap CLI tool](https://mcap.dev/guides/getting-started/ros-1#convert-to-mcap) to convert them to an MCAP.
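As a concrete example of that conversion (the file names here are placeholders; see the linked mcap guide for the full set of options):

```
# Convert a ROS 1 bag to MCAP with the mcap CLI.
# "my_capture.bag" and "my_capture.mcap" are placeholder names.
mcap convert my_capture.bag my_capture.mcap
```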
We had recommended this even before the deprecation was announced, because it greatly improves dataset processing performance on our end. For folder datasets, we will ensure that a suitable conversion tool exists before fully removing support for them in MetriCal. ### Changelog[​](#changelog-3 "Direct link to Changelog") * Added deprecation warnings for folder and ROS 1 bag datasets. * Changed errors to have better messaging and updated the errors docs to match. Some exit codes are no longer returned in practice (although their numbers are not reused). * Foxglove CompressedVideo messages have been added to the MCAP schema map. * Fixed rectification tables in report output. * Fixed basis transformations in MetriCal. * Fixed how init mode consumes URDFs. In particular, init mode now has a resolving strategy that prioritizes plexes first (based on their creation timestamp, newest plex first / oldest plex last), followed by URDFs being applied in order to seed extrinsics if a better extrinsic or spatial constraint is not first provided in an earlier plex or URDF. * Fixed a bug where topic mappings were not applied to seed plexes that were being overwritten (i.e. when the `--overwrite` or `-y` flags were passed to the CLI). * Fixed a bug where plex creation timestamps were not preserved when overwriting topic mappings in init mode. * Fixed a bug where no logging would occur if a pipeline file failed to parse. ## Version 13.0.0 - January 14th, 2025[​](#version-1300---january-14th-2025 "Direct link to Version 13.0.0 - January 14th, 2025") ### Overview[​](#overview-4 "Direct link to Overview") The major feature of this release is a big speedup (2-3x faster in testing) when processing H.264 datasets. Additionally, this release introduces some breaking changes to the `Markers` fiducial type (now named `SquareMarkers`) and minor ones to the format of some metrics in `results.json`. Both of these changes are fairly niche and we do not expect them to have an impact on the vast majority of users. ### Changelog[​](#changelog-4 "Direct link to Changelog") * **(Breaking Change)** The `Markers` fiducial type has been renamed to `SquareMarkers`. Please update your object space file if you are using this (uncommon) fiducial type. * **(Breaking Change)** Added a `marker_ids` field to `SquareMarkers` and removed `marker_length`. * **(Breaking Change)** The `results.json` file no longer contains a `metrics.optimized_object_space` field. Instead, users should use the `object_space` field at the top level of the results, as that now contains the inferred object space. This will not affect users who aren't already doing custom processing of the `results.json` file. * Greatly improved performance when ingesting H.264 datasets. ## Version 12.2.0 - December 13th, 2024[​](#version-1220---december-13th-2024 "Direct link to Version 12.2.0 - December 13th, 2024") ### Overview[​](#overview-5 "Direct link to Overview") This release has some nice improvements to camera ↔ lidar calibration (especially in multi-target scenarios) as well as greatly improved AprilGrid detection quality. In addition, this release includes some visualization and logging quality-of-life improvements. ### Changelog[​](#changelog-5 "Direct link to Changelog") #### Added[​](#added-2 "Direct link to Added") * Added an optional `reflective_tape_width` field to the circle board object space format. This is used as a hint to the detector when identifying retroreflective circles in point clouds.
#### Changed[​](#changed-2 "Direct link to Changed") * Improved detection quality of multiple circle boards in the same environment. * Made various improvements to the underlying optimization calculations. * Upgraded to Rerun v0.19. * Made various improvements to visualization of lidar and image detections. #### Fixed[​](#fixed-3 "Direct link to Fixed") * Greatly improved detection quality on Kalibr-style AprilGrid targets. * Fixed an issue with detecting image features with only a single adjacent tag. * Addressed some small timestamp-related visualization bugs. #### Removed[​](#removed-2 "Direct link to Removed") * Evaluate mode has been hidden as a command due to pending improvements. The mechanism currently used to generate metrics during evaluate mode runs an optimization, which can lead to some hard-to-interpret results. In particular, errors may projectively compensate into the object-space, which at present does not have any user-visible metrics readily available outside of the Rerun visualization. For more information on this change, please see the ["Evaluate" mode documentation](/metrical/12.2/modes/evaluate). ## Version 12.1.0 - October 23rd, 2024[​](#version-1210---october-23rd-2024 "Direct link to Version 12.1.0 - October 23rd, 2024") ### Overview[​](#overview-6 "Direct link to Overview") This is a small update. The main user-facing change is that MetriCal can now handle markerboards generated by newer versions of OpenCV. OpenCV introduced a silent breaking change to its markerboard generation in version 4.6.0, which can cause problems with MetriCal's detection of certain newer boards under some circumstances. For more information, please reference [the initial\_corner field in the markerboard docs](/metrical/targets/target_overview.md). ### Changelog[​](#changelog-6 "Direct link to Changelog") #### Changed[​](#changed-3 "Direct link to Changed") * Updated the internal OpenCV version from 4.5.4 to 4.10.0. #### Added[​](#added-3 "Direct link to Added") * Added support for both the legacy and "new" style OpenCV markerboards. ## Version 12.0.0[​](#version-1200 "Direct link to Version 12.0.0") Release Page MetriCal Sensor Calibration Utilities repository for v12.0.0: ### Overview[​](#overview-7 "Direct link to Overview") This release is probably one of our largest ever. There are new modes, new mathematics, and more expressivity. And as with every major version bump, we've also made some changes to the CLI arguments. Most old arguments that have changed have been deprecated, not removed, so you can still use your scripts from v11.0 (for the most part). You'll just get loud warnings about switching over before v13 comes around... ### Change Your MetriCal Alias![​](#change-your-metrical-alias "Direct link to Change Your MetriCal Alias!") First and foremost: If you're a current user, we suggest adding the `--tty` flag to your metrical bash alias. See an example in the [setup docs](/metrical/configuration/installation.md). This allows MetriCal to render progress bars in the terminal, which makes it a much nicer experience when processing large datasets. Otherwise, you'll just see a blank screen for a while. ### UI + UX Highlights[​](#ui--ux-highlights "Direct link to UI + UX Highlights") #### Offline Licensing[​](#offline-licensing "Direct link to Offline Licensing") MetriCal now offers users the option to cache licenses for offline use. If you're running MetriCal "in the field" (aka away from a modem), this is the feature for you!
Note that offline licenses are only valid for a week before MetriCal needs to ping a Tangram server. Visit [the licensing docs](/metrical/configuration/license_usage.md) for setup and details. #### Descriptive Error Messages[​](#descriptive-error-messages "Direct link to Descriptive Error Messages") We've reworked every. single. error case in MetriCal to have a descriptive error message. Instead of getting something generic, you'll now see the error, the error's cause, and a helpful suggestion on how to fix it. Many of these suggestions link directly to the documentation, so you don't have to search around for the "right answer" anymore. Miette Is Great For those of you developing your own Rust programs, we couldn't recommend [miette](https://docs.rs/miette/latest/miette/) enough. #### Cached Detections[​](#cached-detections "Direct link to Cached Detections") Tired of re-running detections when running a new calibration? We've got you covered. MetriCal now caches its detections from the initial dataset processing. This means you can change models and test different configurations in a fraction of the processing time. There are two ways MetriCal finds detections: * Passed to the CLI via the $DATA required argument. For instance, instead of passing an MCAP, one could just pass that MCAP's cached detections JSON. * Automatically found in the same directory as the dataset. Cached detections are written to the same directory as the dataset, named with a `.detections.json` extension to the dataset name. For example, `/dataset/example.mcap` would have a cached detections file named `/dataset/example.detections.json`. If MetriCal finds this file, it will use it instead of re-processing the entire dataset. Potential Naming Conflicts This also means that datasets with the same name will produce detection caches with the same name. We don't advise naming all of your files the same thing anyway, but... you do you. If you have cached detections, but really want to re-process them anyway, just pass the `--overwrite`/`-y` flag to Calibrate or Evaluate mode. #### Multiple Seed Plex in Init Mode[​](#multiple-seed-plex-in-init-mode "Direct link to Multiple Seed Plex in Init Mode") You can now pass multiple seed plex to Init mode. This is useful in a "big rig" scenario, when it's just not feasible to collect every sensor's data in one run. For example, one might run data from 4 cameras individually, then combine them all into one system to gather extrinsics via Init mode. When processing multiple seed plex, the newest plex is given priority, being processed for new information from newest to oldest. If there is an existing Init'd plex for the dataset, that plex is also used as a seed. This makes it even easier to compare and contrast different calibrations. #### Display Mode[​](#display-mode "Direct link to Display Mode") Version 12.0 also introduces Display mode, which allows you to visualize the applied calibration to any dataset. This takes the place of the `--render` flag in Calibrate and Evaluate modes, which now just renders the detections used for calibration. Important: Display mode expects a running instance of Rerun v0.18. ### Changelog[​](#changelog-7 "Direct link to Changelog") #### Added[​](#added-4 "Direct link to Added") * CLI: * Change the logging level more easily with verbose flags: `-v` (Debug), `-vv` (Trace), `-q` (Warn), or `-qq` (Error). * Multi-progress bar to estimate observation read-in time. Make sure to add `-t` to a Docker alias to use this functionality! * Offline licensing. 
See [the licensing docs](/metrical/configuration/license_usage.md) for setup and details. * An `OptimizationProfile` argument to Calibrate mode that allows users to tune the parameters of the adjustment, e.g. relative error threshold, absolute error threshold, and max iterations. * A binned outlier count is now reported in the camera binned reprojection table. * Display mode to visualize the output of a calibration. * A new sub-mode, `metrical shape tabular`, which can convert a plex into a simplified, stable, tabular format that holds a set of intrinsics (and direct artefacts of the intrinsics, such as look-up tables) alongside extrinsics. The tabular format from this mode can be exported as either JSON or MsgPack binary (to compress the total filesize, as LUTs can be quite large). * Tables for Component Relative Extrinsics, as well as Preintegrated IMU errors. This should give you a much better idea of extrinsics quality beyond the abstract covariance values for spatial constraints. * Data I/O: * Expanded list of encodings for YUYV image types in ROS - `uyvy`, `UYVY`, `yuv422`, `yuyv`, `YUYV`, and `yuv422_yuy2` are all supported. * That's gotta be all of them, right? Right? Let's all come together to make life easier for your old Tangram pals. * H264 message types for ROS1 (from [this project](https://github.com/orchidproject/x264_image_transport)) and MCAP (from [Foxglove's CompressedVideo](https://docs.foxglove.dev/docs/visualization/message-schemas/compressed-video/)). * Added support for the alternate spelling "[jpg](https://www.youtube.com/watch?v=jmaUIyvy8E8\&t=0s)" in `CompressedImage` types. * Detection and reweighting of Paired 3D Point outliers. #### Changed[​](#changed-4 "Direct link to Changed") * Algorithm Changes: * Use new M-estimators, and remove all explicit outlier logic. No more outlier flags! * Paired Plane Normals between lidar-lidar pairs are no longer used. * The motion filter has been rewritten to include both images and lidar detections. Users can manipulate the motion detector thresholds by using the `--camera-motion-threshold` and `--lidar-motion-threshold` flags, or just turn it off with `--disable-motion-filter`. * The motion filter now runs *after* all detections have been processed. This means that you can manipulate the motion filter over cached detections, too. * CLI: * `UmbraOutput` is now `CalibrationOutput`. * All error codes have been changed in favor of simpler error codes and more descriptive fixes. * Look Up Tables generated via the `metrical shape lut` command now use the `image_luts` module from the applications crate. This changes the final schema for look up tables but makes them more directly compatible with OpenCV and other software that expects both row and column remappings to be separated. * Init command: * Init mode can now take multiple plex as seeds. * If a previously generated init plex is found at the same location as the plex-to-be-written, then the previous plex is used as a seed for the new plex. * Change of basis is respected and applied in Init mode when seeding the plex with an input plex. * Rendering: * Observations and detections are rendered in the same entity in the visualization. * When rendering more than one point cloud in Display mode, each component's point clouds are uniformly colored rather than colored by the point's intensity value. This allows for easier comparison between components. * Calibrate mode no longer renders the results of a calibration, only the observations and detections.
Results are applied in Display mode. #### Fixed[​](#fixed-4 "Direct link to Fixed") * CLI: * Component names with "/" no longer cause file I/O issues with the `shape focus` command. * Detectors: * The circle detector for point clouds and the feature detector for images are now much more robust to misdetections and noise. * Init command: * Init mode now correctly handles all (spec'd) cases in which a seed plex is used as the basis for a new plex. * Topics that are not usable by MetriCal for the purposes of calibration are now filtered out. * If there are no usable topics, Init mode will list the usable topics in a dataset for the user. * There was a (self-induced) memory pressure build-up during lidar detection that has been remedied. #### Removed[​](#removed-3 "Direct link to Removed") * Init command: * Preset devices have been removed from Init mode. ## Version 11.0.0[​](#version-1100 "Direct link to Version 11.0.0") Release Page MetriCal Sensor Calibration Utilities repository for v11.0.0: ### Overview[​](#overview-8 "Direct link to Overview") This version bump is a big one! Data processing improvements, algorithmic improvements, CLI simplification... there's a little something for everyone here. Most of the changes in 11.0.0 were made based on testing from customers in the field. Thanks, everyone! Note that many CLI arguments have been shifted or removed. Be sure to check the changelog and updated documentation to make sure your settings remain intact. ### Changelog[​](#changelog-8 "Direct link to Changelog") #### Added[​](#added-5 "Direct link to Added") * Visualization improvements: * Render navigation states and IMU data by default. * Render all observations when log level is set to debug. * Change of Basis tooling: * Add `--topic-to-observation-basis` global argument to all modes in order to map a coordinate basis to each component's observations on plex construction ([docs](/metrical/commands/commands_overview.md#universal-options)). * Add `--topic-to-component-basis` global argument to all modes in order to map a coordinate basis to each component on plex construction ([docs](/metrical/commands/commands_overview.md#universal-options)). * Changed-basis plex is now included in MetriCal output ([docs](/metrical/core_concepts/constraints.md#to-and-from)). #### Changed[​](#changed-5 "Direct link to Changed") * Init mode only looks through the first 100 observations to create the initial plex; this should greatly speed up Init mode for most datasets. * All observations are now run through their respective detectors before image motion filtering occurs. * The motion filter acts on the image detections themselves, not the entire image. This should improve the quality of the motion filter in busy, "noisy" datasets ([docs](/metrical/commands/calibrate.md#image-motion-filtering)). * A user-specified root is now required to generate the URDF from a plex in Shape mode ([docs](/metrical/commands/shape/shape_urdf.md)). * Reformulate IMU calibration approach: * Remove unnecessary IMU initialization. * Implement improvements to IMU optimization mathematics. * Only one IMU bias is inferred during calibration instead of a wandering bias. * Shape mode arguments now follow the order `metrical shape [command] [arguments] [plex] [output]` ([docs](/metrical/commands/shape/shape_overview.md)). * `license` and `report-path` arguments are now global arguments, and can be passed to any mode ([docs](/metrical/commands/commands_overview.md)). * Lowered the minimum number of points required to fit a circle detection.
#### Fixed[​](#fixed-5 "Direct link to Fixed") * Init mode now properly handles differences in the seeded plex (passed through with `-p`) and the created plex. This has been a bug for longer than we care to admit; we're glad it's fixed ([docs](/metrical/commands/init.md))! * Observations are no longer held in memory while motion filtering occurs. This greatly reduces memory usage on LiDAR-heavy datasets. #### Removed[​](#removed-4 "Direct link to Removed") * Caching of detections and filtered output is no longer supported in Calibrate and Evaluate modes. ## Version 10.0.0 (Yanked)[​](#version-1000-yanked "Direct link to Version 10.0.0 (Yanked)") Yanked! This release was yanked due to a critical bug introduced in [Init command](/metrical/commands/init.md) that wasn't caught in time. The odds of many users running into this bug were low, but we played it safe and yanked this version entirely. All relevant changes will be added to the changelog for 11.0.0. ## Version 9.0.0[​](#version-900 "Direct link to Version 9.0.0") Release Page MetriCal Sensor Calibration Utilities repository for v9.0.0: ### Overview[​](#overview-9 "Direct link to Overview") Version 9.0.0 can be considered a refinement of v8.0, with focus on improving the user experience and clarifying outputs. It also introduces a few new intrinsics models across components. Some default behavior has changed as well, so be sure to check the changelog for details. This release also includes a big update to rendering. Be sure to update your Rerun version to v0.14! ### Changelog[​](#changelog-9 "Direct link to Changelog") #### Added[​](#added-6 "Direct link to Added") * Introduced `LidarSummary` summary statistics to report lidar-specific metrics ([docs](/metrical/results/report.md#ss-3-lidar-summary-statistics)). * Support for new IMU intrinsics: Scale, Shear, Rotation, and G-Sensitivity ([docs](/metrical/calibration_models/imu.md)). * Support for the Omnidirectional camera model ([docs](/metrical/calibration_models/cameras.md#omnidirectional-omni)). #### Changed[​](#changed-6 "Direct link to Changed") * MetriCal now uses Rerun v0.14! 🎊 Make sure to update your version of Rerun accordingly. * The summary statistics table is now three tables, for optimization, cameras, and lidar respectively ([docs](/metrical/results/report.md#output-summary)). * `PerComponentRMSE` in Summary Statistics is now `CameraSummary` ([docs](/metrical/results/report.md#ss-2-camera-summary-statistics)). * Circle detector's `detect_interior_points` option is now a mandatory variable, and has no default value ([docs](/metrical/targets/target_overview.md)). * Circle detector now takes an `x_offset` and `y_offset` variable to describe the center of the circle w\.r.t. the full board frame ([docs](/metrical/targets/target_overview.md)). * Object relative extrinsics (OREs) are now generated by default. In turn, the `--enable-ore-inference` flag has been removed and replaced with `--disable-ore-inference` ([docs](/metrical/commands/calibrate.md#--disable-ore-inference)). * The camera component initialization process during calibration has been improved to better handle significant distortion. #### Fixed[​](#fixed-6 "Direct link to Fixed") * Rerun rendering code has been completely refactored for user clarity and speed of execution. * Lidar-lidar datasets are now rendered and registered along with camera-lidar. * Object relative extrinsics are now rendered when available. * Images now use lookup tables properly for quick correction. 
* Spaces have been reorganized for clarity and ease of use. * Datasets without cameras no longer print empty camera tables. ## Version 8.0.1[​](#version-801 "Direct link to Version 8.0.1") Release Page MetriCal Sensor Calibration Utilities repository for v8.0.1: ### Overview[​](#overview-10 "Direct link to Overview") This version fixes a small bug found in pipeline license validation. ### Changelog[​](#changelog-10 "Direct link to Changelog") #### Fixed[​](#fixed-7 "Direct link to Fixed") * `null` license values in a pipeline configuration are discarded, not interpreted as a provided license key. *** ## Version 8.0.0[​](#version-800 "Direct link to Version 8.0.0") Release Page MetriCal Sensor Calibration Utilities repository for v8.0.0: ### Overview[​](#overview-11 "Direct link to Overview") This release brings a ton of new features to the MetriCal CLI, most of them focused on improving the user experience. The biggest difference is one you won't see: all of the math done during optimization is now fully sparse, which means it takes a *lot* less memory to run a calibration. And smart convergence criteria means that calibrations are faster, too! We've also added a new mode, `pipeline`, which allows you to run a series of commands in serial. Find out more about it in the [Pipeline Command](/metrical/commands/pipeline.md) section of the documentation. ### Changelog[​](#changelog-11 "Direct link to Changelog") #### Added[​](#added-7 "Direct link to Added") * Pipeline mode. This executes a series of Commands in serial, as they're written in a pipeline JSON file. * Render the optimized plex at the end of a calibration. * The subplex ID for a spatial constraint now shows up in the Extrinsics table. * Input JSON files with comments are now accepted as valid. #### Changed[​](#changed-7 "Direct link to Changed") * All calibrations now undergo outlier detection and reweighting as part of the BA process. Outliers are detected for cameras, lidar, and relative extrinsics. * Summary table is sorted by component name, not by UUID. * Summary statistics in console are now computed using a weighted RMSE. * The bundle adjustment is now a fully sparse operation, relieving memory pressure on larger datasets. #### Fixed[​](#fixed-8 "Direct link to Fixed") * The height of the sync group chart now adjusts with the number of components present in the dataset. * Bug in Init mode when using a RealSense435Imu preset. * All stereo pairs are now derived and graphed in console output #### Removed[​](#removed-5 "Direct link to Removed") * The `--metrics-with-outliers` flag and the `--outlier-filter` flag have been removed. *** ## Version 7.0.1[​](#version-701 "Direct link to Version 7.0.1") Release Page MetriCal Sensor Calibration Utilities for v7.0.1: ### Overview[​](#overview-12 "Direct link to Overview") This patch release fixes various errata found in the [v7.0.0 release](https://gitlab.com/tangram-vision/platform/metrical/-/releases/v7.0.0). ### Changelog[​](#changelog-12 "Direct link to Changelog") #### Fixed[​](#fixed-9 "Direct link to Fixed") * Rendering the correction at the end of a calibration actually uses the corrected plex (rather than the input plex). * The extrinsics table now only shows delta values if an input plex is provided. * Camera-lidar extrinsics rendering in Rerun now takes the spatial constraint with the minimum covariance. This is different from the previous behavior where all spatial constraints were rendered, regardless of quality. 
*** ## Version 7.0.0[​](#version-700 "Direct link to Version 7.0.0") Release Page MetriCal Sensor Calibration Utilities for v7.0.0: ### Overview[​](#overview-13 "Direct link to Overview") Welcome, v7.0.0! Yes, merely a week after v6.0.0, we bump major versions. Classic. #### Revised Documentation[​](#revised-documentation "Direct link to Revised Documentation") It's finally here! The revised Tangram Vision documentation site is live: . This documentation site holds readmes and tutorials on all of Tangram Vision's products. The MetriCal section is fully-featured, and based on this release. We will be maintaining all documentation for this and newer versions on the official docs site from here on out. #### LiDAR-LiDAR Calibration[​](#lidar-lidar-calibration "Direct link to LiDAR-LiDAR Calibration") MetriCal now supports LiDAR-LiDAR calibration, no cameras needed. Users will need a [lidar circle target](/metrical/targets/target_overview.md) to calibrate LiDAR. #### New Modes - `pretty-print` and `evaluate`[​](#new-modes---pretty-print-and-evaluate "Direct link to new-modes---pretty-print-and-evaluate") v7.0.0 introduces two new modes: * Pretty Print does what it says on the tin: prints the plex or results of a calibration in a human-readable format. This is useful for debugging and for getting a quick overview of the calibration. Docs: [https://docs.tangramvision.com/metrical/commands/pretty\_print/](/metrical/13.2/modes/pretty_print) * Evaluate can apply a calibration to a given dataset and produce metrics to validate the quality of the calibration. Docs: [https://docs.tangramvision.com/metrical/commands/evaluate/](/metrical/12.0/modes/evaluate). The Calibrate mode, by extension, no longer has an `--evaluate` flag. This is just Evaluate mode. #### Revamped Rendering options + Deprecated Review mode[​](#revamped-rendering-options--deprecated-review-mode "Direct link to Revamped Rendering options + Deprecated Review mode") Review mode is no longer! Instead, passing the `--render` flag to either Calibrate or Evaluate mode will render the corrected calibration at the end of the run. Also, `--render` and `--render-socket` options are no longer global. Instead, they are only applicable for Calibrate and Evaluate modes. ### Changelog[​](#changelog-13 "Direct link to Changelog") #### Added[​](#added-8 "Direct link to Added") * Support for calibrating multi-LiDAR, no-camera datasets. This still leverages MetriCal's circle target, but no longer requires a camera or any image topics to be present in order to calibrate. * Pretty Print mode for printing out a plex in a human readable format. * Evaluate mode to evaluate the quality of a calibration on a test dataset. This is a reinterpretation of the `--evaluate` flag that was in the Calibrate mode; it's just been given its own command for ease of use. * A `verbose` flag to the shape mode to print the plex that has been created from the command. * Additional descriptions to errors that can be generated by calling the calibrate mode. #### Changed[​](#changed-8 "Direct link to Changed") * Calibrate + Evaluate mode now renders its own corrections (rather than punting that capability to review). * The `--render` flag at the global level has been moved to the Calibrate & Evaluate modes. #### Removed[​](#removed-6 "Direct link to Removed") * Review mode and README mode have been removed completely. Review mode's previous functionality is now split between Pretty Print mode and Calibrate mode. * The `--evaluate` flag in Calibrate mode. 
#### Fixed[​](#fixed-10 "Direct link to Fixed") * Summary statistics tables now have the correct units displayed alongside their quantities. * Printing the results of a plex with no extrinsics will now print an empty table, rather than nothing at all. * Nominal extrinsics deltas in tables are now represented by the string "--". #### Errata[​](#errata "Direct link to Errata") * The "corrected" rendering at the end of a calibration run mistakenly uses the input plex, not the output plex. Scheduled fix: v7.0.1. * The output extrinsics table does not correctly calculate the delta between the input and output plex. Scheduled fix: v7.0.1. * Rendering the corrected camera-lidar registration would take the first spatial constraints available, rather than taking the constraint with the minimum covariance. This often makes calibrations appear wildly incorrect, despite a good calibration. Scheduled fix: v7.0.1. --- # Calibrate Mode ## Purpose[​](#purpose "Direct link to Purpose") * Calibrate a multi-modal system based on pre-recorded data. * Visualize the calibration data capture and detection process. ## Usage[​](#usage "Direct link to Usage")

```
metrical calibrate [OPTIONS] \
    <INPUT_DATA_OR_DETECTIONS_PATH> \
    <PLEX_OR_RESULTS_PATH> \
    <OBJECT_SPACE_OR_RESULTS_PATH>
```

## Concepts[​](#concepts "Direct link to Concepts") The Calibrate command runs a full bundle adjustment over the input calibration data. It requires three main arguments: * The input data * An initial plex (usually derived from the [Init command](/metrical/commands/init.md)). This represents a naive guess at the state of your system. * An object space file. This tells MetriCal what targets to look for during calibration. We mean it when we say this runs a *full* bundle adjustment. MetriCal will optimize both the plex values (the calibration of your system, effectively) *and* the object space values. This means that MetriCal is robust to bent boards, misplaced targets, etc. All of these values will be solved for during the adjustment, and your calibration results will be all the better for it. We use ANSI terminal codes for colorizing output according to an internal assessment of metric quality. Generally speaking: * Cyan: spectacular * Green: good * Orange: okay, but generally poor * Red: bad Note that these judgements are applied internally to the software and have limits that may change release-to-release. We advise that you determine your own internal limits for each independent metric. ### Cached Detections[​](#cached-detections "Direct link to Cached Detections") Extracting detections from a dataset usually takes the lion's share of time during a run. To make this less onerous on subsequent runs, MetriCal will automatically create a cache of detections (in JSON format) and save them to the same directory as the input dataset. #### Preserving UUIDs for Cached Detections[​](#preserving-uuids-for-cached-detections "Direct link to Preserving UUIDs for Cached Detections") Detections rely on matching UUIDs between their data and the components in the plex to correctly attribute observations. This means that **generating a new Init plex from scratch will require new detections**, since the UUIDs will have changed. MetriCal does its best to preserve UUIDs between Init runs by using any previous Init plex as a seed. This allows one to play with different models without having to re-run the detection stage of the calibration. ### Motion Filtering[​](#motion-filtering "Direct link to Motion Filtering") MetriCal runs a motion filter over any and all images and point clouds in the input dataset.
This filter removes any features that might be affected by motion blur, rolling shutter, false detections, or other artifacts. This is a critical step in the calibration process, as it ensures that the data used for calibration is the best we can get. There are three states that the motion filter can assign to a component: * **In Motion**: The movement between subsequent detections is above the motion threshold for this modality. * **Still**: The movement between subsequent detections is below the motion threshold for this modality. * **Still (Redundant)**: The component is Still *and* the movement between detections is small enough that they can be considered the same detection. The motion filter works at the sync group level, not component-by-component. You can see how this decision affects the filter's behavior over time. If one component is moving, the motion filter labels all other components as moving as well, regardless of their actual detected motion status. This decision makes intuitive sense: in a calibration dataset, all components (or all object spaces) should be moving together. If you have a dataset that is a series of snapshots, rather than continuous motion, you can disable the motion filter using the `--disable-motion-filter` flag. ![Motion States over time](/assets/images/motion_state_timeline_v12-b70ea6416820079f7f96125780af6ac3.png) #### Image Motion Filtering[​](#image-motion-filtering "Direct link to Image Motion Filtering") The image motion filter is based on the average flow of detected features between subsequent frames. Background motion is not considered, so you can still use this filter when taking data in a busy place. * Any motion that results in an average pixel shift higher than the camera motion threshold (`--camera-motion-threshold`) is considered **In Motion**. The default value for the camera motion threshold is 1 pixel/frame, but this can be adjusted using the flag. * Any motion that results in an average pixel shift lower than the camera motion threshold is considered **Still**. * If two frames are both still and the second frame's features have shifted less than 1 pixel on average, the second frame is considered **Still (Redundant)**. These redundant frames add little information to the calibration process and are removed. ![Motion States for Cameras](/assets/images/motion_state_camera_v12-dff1edfacd3ba24b06f37f1379577fa7.png) #### Lidar Motion Filtering[​](#lidar-motion-filtering "Direct link to Lidar Motion Filtering") The lidar motion filter is based on the movement of the detected center point of a [Circle Target](/metrical/targets/target_overview.md) between subsequent observations. * Any motion that results in the center point moving more than the lidar motion threshold is considered **In Motion**. The default value for lidar is 0.1m (a generous threshold!). * Any motion that results in the center point moving less than the lidar motion threshold is considered **Still**. * There is no **Still (Redundant)** state for lidar data. ![Motion States for Lidar](/assets/images/motion_state_lidar-e8c857bdb448f715cead0e0c0e9235cf.png) ### Extracting Optimized Values from a Results JSON[​](#extracting-optimized-values-from-a-results-json "Direct link to Extracting Optimized Values from a Results JSON") There is a *lot* of data in a results JSON output from MetriCal. Luckily, it's all JSON, and it's fairly easy to pick apart. 
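For instance, once you have `jq` installed (see below), a quick way to see what's available is to list the top-level keys first. This is just a sketch; the exact set of top-level fields may vary between MetriCal versions:

```
# List the top-level fields of a MetriCal results file
jq 'keys' results.json
# e.g. ["object_space", "plex", ...]
```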
Use [`jq`](https://jqlang.github.io/jq/) to extract this data into its own file for easy analysis and comparison to the original inputs. Install jq using apt:

```
sudo apt install jq
```

Then use jq to extract the relevant optimized data.

```
jq .plex results.json > optimized_plex.json
jq .object_space results.json > optimized_obj.json
```

## Examples[​](#examples "Direct link to Examples")

#### Run a calibration[​](#run-a-calibration "Direct link to Run a calibration")

> ```
> metrical calibrate \
>   --output-json results.json \
>   $DATA $PLEX $OBJSPC
> ```

#### Visualize detections[​](#visualize-detections "Direct link to Visualize detections")

> Make sure you have a Rerun server running!
>
> ```
> metrical calibrate \
>   --render \
>   --output-json results.json \
>   $DATA $PLEX $OBJSPC
> ```

#### Remap topics to different components in a plex[​](#remap-topics-to-different-components-in-a-plex "Direct link to Remap topics to different components in a plex")

> With the `--topic-to-component` flag, you can map topics to component names or UUIDs in the input plex. This prevents laborious hand-tuning of a plex JSON whenever a dataset is recorded with a different topic name.
>
> For example, if we called our folder `camera_1` and that corresponded to some component named `ir_SN5XXX` in the plex, then we could do the following:
>
> ```
> # camera_1:ir_SN5XXX is our corrected topic-component relation;
> # results.json is the output file
> metrical calibrate \
>   --topic-to-component camera_1:ir_SN5XXX \
>   --output-json results.json \
>   $DATA $PLEX $OBJSPC
> ```

#### Save calibration log output[​](#save-calibration-log-output "Direct link to Save calibration log output")

> Log output from the program is written to `stderr`. If you want to record this for later, you can save a log by using the `--report-path` flag. This will save a .log file and, if possible, an HTML version of the log as well.
>
> ```
> metrical calibrate --report-path metrical.log $DATA $PLEX $OBJSPC
> ```

#### Disable the motion filter for motion-heavy datasets[​](#disable-the-motion-filter-for-motion-heavy-datasets "Direct link to Disable the motion filter for motion-heavy datasets")

> By default, MetriCal will run a motion filter over the input dataset and remove observations with too much motion, i.e. where it appears that relative motion was too large. For observation streams with inconsistent or infrequent observations, this can result in filtering everything! It can also be a sign of malformed timestamps.
>
> Filtering is recommended when running a dataset with generally quick observation frequencies (>15Hz) and consistent timestamps. Otherwise, you should disable the filter with the [`--disable-motion-filter`](#--disable-motion-filter) flag.
>
> ```
> metrical calibrate --disable-motion-filter $DATA $PLEX $OBJSPC
> ```

## Arguments[​](#arguments "Direct link to Arguments") #### \[INPUT\_DATA\_OR\_DETECTIONS\_PATH][​](#input_data_or_detections_path "Direct link to \[INPUT_DATA_OR_DETECTIONS_PATH]") > The dataset with which to calibrate, or the detections from an earlier run. Users can pass: > > 1. ROS1 bags, in the form of a `.bag` file. > 2. MCAP files, with an `.mcap` extension. > 3. A top-level directory containing a set of nested directories for each topic. > 4. A detections JSON with all the cached detections from a previous run. > > In all cases, the topic/folder name must match a named component in the plex in order to be matched correctly.
If this is not the case, there's no need to edit the plex directly; instead, one may use the [`--topic-to-component`](#-m---topic-to-component-topic-to-component) flag. #### \[PLEX\_OR\_RESULTS\_PATH][​](#plex_or_results_path "Direct link to \[PLEX_OR_RESULTS_PATH]") > The path to the input plex. This can be a MetriCal results JSON or a plex JSON. #### \[OBJECT\_SPACE\_OR\_RESULTS\_PATH][​](#object_space_or_results_path "Direct link to \[OBJECT_SPACE_OR_RESULTS_PATH]") > A path pointing to a description of the object space for the adjustment. This can be a MetriCal results JSON or an object space JSON. ## Options[​](#options "Direct link to Options") #### Universal Options[​](#universal-options "Direct link to Universal Options") As with every command, all [universal options](/metrical/commands/commands_overview.md#universal-options) are supported (though not all may be used). #### -y, --overwrite-detections[​](#-y---overwrite-detections "Direct link to -y, --overwrite-detections") > Overwrite the detections at this location, if they exist. #### --override-diagnostics[​](#--override-diagnostics "Direct link to --override-diagnostics") > When this flag is set to true, a calibration dataset that presents data diagnostics errors will still output a calibration result. #### --licensing-timeout[​](#--licensing-timeout "Direct link to --licensing-timeout") > **Default: 10 (seconds)** > > The timeout (in seconds) for the calls to the Tangram licensing server. This may be useful if you are calibrating on a machine with spotty or slow internet access. #### --disable-motion-filter[​](#--disable-motion-filter "Direct link to --disable-motion-filter") > Disable data filtering based on motion from any data source. This is useful for datasets that are a series of snapshots, rather than continuous motion. #### --camera-motion-threshold \[CAMERA\_MOTION\_THRESHOLD][​](#--camera-motion-threshold-camera_motion_threshold "Direct link to --camera-motion-threshold \[CAMERA_MOTION_THRESHOLD]") > **Default: 1.0 (pixels/observation)** > > This threshold is used for filtering camera data based on detected feature motion in the image. An image is considered "still" if the average delta in features between subsequent frames is below this threshold. The units for this threshold are in pixels/frame. #### --lidar-motion-threshold \[LIDAR\_MOTION\_THRESHOLD][​](#--lidar-motion-threshold-lidar_motion_threshold "Direct link to --lidar-motion-threshold \[LIDAR_MOTION_THRESHOLD]") > **Default: 0.1 (meters/observation)** > > This threshold is used for filtering lidar data based on detected feature motion in the point cloud's detected circle center. A point cloud is considered "still" if the average delta in metric space between subsequent detected circle centers is below this threshold. The units for this threshold are in meters/observation. #### -o, --results-path \[RESULTS\_PATH][​](#-o---results-path-results_path "Direct link to -o, --results-path \[RESULTS_PATH]") > **Default: path/to/dataset/\[name\_of\_dataset].results.json** The output path to save the final results of the program, in JSON format. #### --optimization-profile \[OPTIMIZATION\_PROFILE][​](#--optimization-profile-optimization_profile "Direct link to --optimization-profile \[OPTIMIZATION_PROFILE]") > **Default: standard** > The optimization profile of the bundle adjustment. This is a high-level setting that controls the optimization's maximum iterations, relative cost threshold, and absolute cost threshold. 
> > Possible values: > > * **performance**: Use this argument when speed is necessary. > * **standard**: This is a balanced profile between speed and accuracy. > * **minimize-error**: This profile maximizes accuracy. For some small datasets, this profile may be just as fast as the standard profile. However, for larger datasets, this profile may take significantly longer to converge. #### --opt-max-iterations \[OPT\_MAX\_ITERATIONS][​](#--opt-max-iterations-opt_max_iterations "Direct link to --opt-max-iterations \[OPT_MAX_ITERATIONS]") > The maximum iteration count on the bundle adjustment. This is a hard limit; if the optimization does not converge within this number of iterations, it will stop and return its current state. #### --opt-relative-threshold \[OPT\_RELATIVE\_THRESHOLD][​](#--opt-relative-threshold-opt_relative_threshold "Direct link to --opt-relative-threshold \[OPT_RELATIVE_THRESHOLD]") > The relative cost termination threshold for the bundle adjustment. "Relative cost" is the difference in cost between iterations. If the relative cost between iterations is below the threshold, the calibration will exit. #### --opt-absolute-threshold \[OPT\_ABSOLUTE\_THRESHOLD][​](#--opt-absolute-threshold-opt_absolute_threshold "Direct link to --opt-absolute-threshold \[OPT_ABSOLUTE_THRESHOLD]") > The absolute cost termination threshold for the bundle adjustment. "Absolute cost" is the value of the cost at a given iteration of the optimization. In practice, this value is hardly ever reached; the optimization will hit a local minimum well above this value and exit. #### -m, --topic-to-component \[TOPIC-TO-COMPONENT][​](#-m---topic-to-component-topic-to-component "Direct link to -m, --topic-to-component \[TOPIC-TO-COMPONENT]") > A mapping of ROS topic/folder names to component names/UUIDs in the input plex. > MetriCal only parses data that has a topic-component mapping. Ideally, topics and components share the same name. However, if this is not the case, use this flag to map topic names from the dataset to component names in the plex. #### --preserve-input-constraints[​](#--preserve-input-constraints "Direct link to --preserve-input-constraints") > Preserve the spatial constraints that were input for this dataset. > > The default behavior is to discard any constraints that were not derived from the data (e.g. spatial constraints from sync groups) or optimized. Pass this flag if the output plex should include any constraints that were input but not found or optimized. #### --disable-ore-inference[​](#--disable-ore-inference "Direct link to --disable-ore-inference") > Disable inference of object relative extrinsic constraints in the bundle adjustment. > > The default behavior is to infer object relative extrinsic constraints in the bundle adjustment. Pass this flag if object relative extrinsic constraints should not be inferred; this may save a substantial amount of time in datasets using many object spaces. > > It's generally expected that disabling ORE inference will not affect calibration accuracy. The main downside to setting this flag is that MetriCal will not output the object space intrinsics as part of the calibration results, which may disable visualization of the target poses when using `metrical display`. #### -r, --render[​](#-r---render "Direct link to -r, --render") > Whether to visualize the detections using Rerun.
Run
>
> ```
> rerun --memory-limit=1GB
> ```
>
> ...in another process to start a visualization server in Rerun. Read more about configuring Rerun [here](/metrical/configuration/visualization.md). #### --render-socket \[RENDER\_SOCKET][​](#--render-socket-render_socket "Direct link to --render-socket \[RENDER_SOCKET]") > The web socket address on which Rerun is listening. This should be an IP address and port number separated by a colon, e.g. `--render-socket="127.0.0.1:3030"`. By default, Rerun will listen on socket `host.docker.internal:9876`. If running locally (not via Docker), Rerun's default socket is `127.0.0.1:9876`. > > When running Rerun from its CLI, the IP would correspond to its `--bind` option and the port would correspond to its `--port` option. --- # Errors & Troubleshooting MetriCal logs various error codes during operation or upon exiting. The documentation here catalogs these errors, along with descriptions (🔍) and troubleshooting (🔧) steps. While there are a large number of errors that can occur for various reasons, we do our best to keep this aligned with the current collection of error codes and causes that we find in our products. If we've missed something here, or if you need additional clarification, please contact us at . ## Error Codes in Logs[​](#error-codes-in-logs "Direct link to Error Codes in Logs") During operation, error codes may be printed to the log (stderr) as informational warnings or as errors that caused MetriCal to fail and exit. These are usually coded as `cal-calibrate-001` or `license-002` or something similar. In addition to these codes, there are usually anchors or sections in the docs that relate to each individual code. ## MetriCal Exit Codes[​](#metrical-exit-codes "Direct link to MetriCal Exit Codes") When exiting, MetriCal will print the exit code and return it from the process. The table below lists the exit codes and a description of what each code means.

| Exit Code | Error Type                        | Notes                                                               |
| --------- | --------------------------------- | ------------------------------------------------------------------- |
| 0         | Success, No Errors                | Good Job!                                                           |
| 1         | IO                                | Filesystem and Input/Output related errors                          |
| 2         | Cli                               | Command Line Interface related errors                               |
| 3         | ShapeMode                         | [Shape](/metrical/commands/shape/shape_overview.md) related errors  |
| 4         | InitMode                          | [Init](/metrical/commands/init.md) related errors                   |
| 5         | *Pretty Print mode errors*        | Historical exit code, unused                                        |
| 6         | MetriCalLicense                   | License related errors                                              |
| 7         | *Display mode errors*             | Historical exit code, unused                                        |
| 8         | CalibrateMode                     | [Calibrate](/metrical/commands/calibrate.md) related errors         |
| 9         | *Completion mode errors*          | Historical exit code, unused                                        |
| 10        | Rendering                         | Rendering related errors                                            |
| 11        | *Consolidate object space errors* | Historical exit code, unused                                        |
| 12        | Diagnostic                        | A High-Level Diagnostic was thrown                                  |
| 255       | Other                             | Other unspecified errors (u8::MAX = 255)                            |

## Error Sub-Codes[​](#error-sub-codes "Direct link to Error Sub-Codes") Many codes have sub-codes that describe the specific error that occurred. Find a list of relevant sub-codes below. ### Code 1: IO Errors[​](#code-1-io-errors "Direct link to Code 1: IO Errors") This section contains information related to the various errors that can occur due to accessing the filesystem. Many of these issues may relate to the fact that MetriCal is often run inside of a Docker container.
As such, be sure to check that the specific path related to the error is correctly [mounted as a volume inside of MetriCal](/metrical/configuration/installation.md#use-metrical-with-docker). #### File Read Failure (cal-io-001)[​](#cal-io-001 "Direct link to File Read Failure (cal-io-001)") > 🔍 Failed to locate any file at the provided path. > > 🔧 If you are operating MetriCal in a Docker instance, make sure you are using the mounted Docker work directory, not your `$HOME` or local paths. #### Memory Map Failure (cal-io-002)[​](#cal-io-002 "Direct link to Memory Map Failure (cal-io-002)") > 🔍 Failed to memory map the provided data set. > > 🔧 This usually would fail if you do not have read permissions on the file in question (usually an MCAP file). #### Malformed MCAP (cal-io-003)[​](#cal-io-003 "Direct link to Malformed MCAP (cal-io-003)") > 🔍 The MCAP file provided could not be read. > > 🔧 Usually this indicates that while the file path looks like an MCAP file, the file itself does not conform to the MCAP specification (see ), or is corrupt in some way. #### Malformed ROSbag (cal-io-004)[​](#cal-io-004 "Direct link to Malformed ROSbag (cal-io-004)") > 🔍 The ROSbag file provided could not be read. > > 🔧 Similar to [cal-io-003](#cal-io-003), the file path appears as a ROS 1 Bag v2.0, but does not conform to the ROS bag specification or is corrupt in some way. #### Malformed Plex Or Results (cal-io-005)[​](#cal-io-005 "Direct link to Malformed Plex Or Results (cal-io-005)") > 🔍 The provided path, when read, was not recognized as a valid plex. > > 🔧 Acceptable files include either a self-contained plex or a MetriCal results file. See [our plex documentation](/metrical/core_concepts/plex_overview.md) for the correct JSON Schema for plex files. #### Malformed Object Space Or Results (cal-io-006)[​](#cal-io-006 "Direct link to Malformed Object Space Or Results (cal-io-006)") > 🔍 The provided path, when read, was not recognized as a valid object space. > > 🔧 Acceptable files include either a self-contained object space or a MetriCal results file. See [our object space documentation](/metrical/core_concepts/object_space_overview.md) for the correct JSON Schema for object-space files. #### Malformed Pipeline (cal-io-007)[​](#cal-io-007 "Direct link to Malformed Pipeline (cal-io-007)") > 🔍 The provided pipeline file is malformed. > > 🔧 If you need help structuring your pipeline JSON, log in debug mode (-v). The JSON for the current command will print to the console. #### Invalid Seed Plex (cal-io-008)[​](#cal-io-008 "Direct link to Invalid Seed Plex (cal-io-008)") > 🔍 The seed file provided could not be parsed as either a plex, results file, or URDF. > > 🔧 The file that was provided was not recognized as a valid plex, MetriCal results file, or ROS URDF. See [our plex documentation](/metrical/core_concepts/plex_overview.md) for the correct JSON Schema for plex files. If using a results file, beware that we do not guarantee compatibility of results files between versions of MetriCal. #### Folder No Components (cal-io-009)[​](#cal-io-009 "Direct link to Folder No Components (cal-io-009)") > 🔍 Failed to read any component folders in the data set. > > 🔧 Check your component names. Do they match your observation folder names? #### Folder Filenames Not Timestamps (cal-io-010)[​](#cal-io-010 "Direct link to Folder Filenames Not Timestamps (cal-io-010)") > 🔍 Failed to interpret any filenames in any folders as timestamps. 
> > 🔧 Observation filenames should be formatted in nanoseconds: > > * ❌ camera\_5.png > * ❌ 2022-01-26T12:02:33.906000000.png > * ✅ 1643230553906000000.png #### Deserialize Failure (cal-io-011)[​](#cal-io-011 "Direct link to Deserialize Failure (cal-io-011)") > 🔍 Failed to serialize a struct when reading in a file. > > 🔧 If this is a JSON file, make sure that you're not missing any commas. MetriCal can accept JSON with comments, so don't feel the need to modify your JSON heavily. #### Serialize Failure (cal-io-012)[​](#cal-io-012 "Direct link to Serialize Failure (cal-io-012)") > 🔍 Failed to serialize data as JSON. > > 🔧 Check your JSON structure for any inconsistencies or errors. #### Msgpack Failure (cal-io-013)[​](#cal-io-013 "Direct link to Msgpack Failure (cal-io-013)") > 🔍 Failed to serialize tabular calibration as message pack. > > 🔧 This may indicate an issue with the calibration data format. #### Write Failure (cal-io-014)[​](#cal-io-014 "Direct link to Write Failure (cal-io-014)") > 🔍 Failed to write file at path. > > 🔧 This error can often occur due to permissions errors in the path being written. Double check that your user has permission (and space left on the device) to write the file. #### Cached Detections For Wrong Plex (cal-io-015)[​](#cal-io-015 "Direct link to Cached Detections For Wrong Plex (cal-io-015)") > 🔍 The cached detections read in for this dataset did not match the component UUIDs in this plex. > > 🔧 This occurs when a plex has been changed or modified after detections have been cached for a particular data set. You can either verify that the UUIDs match the previous plex, or remove the cached detections and run MetriCal on the data anew. #### URDF Conversion Error (cal-io-016)[​](#cal-io-016 "Direct link to URDF Conversion Error (cal-io-016)") > 🔍 Failed to convert Plex to URDF. > > 🔧 Check your plex configuration for compatibility with URDF format requirements. #### Metadata Error (cal-io-017)[​](#cal-io-017 "Direct link to Metadata Error (cal-io-017)") > 🔍 Failed to read metadata for file at a given path. > > 🔧 This error can occur for a number of reasons: either because the path is wrong, the inode for the file has been removed while querying for it, or because of some other block-device level fault in the system. ### Code 2: CLI Errors[​](#code-2-cli-errors "Direct link to Code 2: CLI Errors") #### Missing Component (cal-cli-001)[​](#cal-cli-001 "Direct link to Missing Component (cal-cli-001)") > 🔍 There is no component with the provided name or UUID in the plex. > > 🔧 Using the `init` command to create a plex ensures that all components are properly named and assigned a UUID according to the dataset being processed. #### Duplicate Component Specification (cal-cli-002)[​](#cal-cli-002 "Direct link to Duplicate Component Specification (cal-cli-002)") > 🔍 The same component has been specified more than once. > > 🔧 Comparing a component to itself just gives identity. This is good news: The rules of the universe are still intact. #### Ambiguous Topic Mappings (cal-cli-003)[​](#cal-cli-003 "Direct link to Ambiguous Topic Mappings (cal-cli-003)") > 🔍 There are ambiguous topic-to-component mappings. > > 🔧 If you're using a glob matching pattern (i.e. `-m *image*:___`), make sure that there aren't glob patterns between assignments that could be interpreted as a duplicate. #### Invalid Key-Value Pair (cal-cli-004)[​](#cal-cli-004 "Direct link to Invalid Key-Value Pair (cal-cli-004)") > 🔍 Invalid key:value pair provided as an argument. 
> > 🔧 Format this argument as `key:value`. #### Invalid Coordinate Basis (cal-cli-005)[​](#cal-cli-005 "Direct link to Invalid Coordinate Basis (cal-cli-005)") > 🔍 Failed to recognize coordinate basis. Bases must be orthonormal. > > 🔧 Valid bases contain the following short descriptors for a given orthonormal format: > > * `L` for "Left" > * `R` for "Right" > * `U` for "Up" > * `D` for "Down" > * `F` for "Forward" > * `B` for "Backwards" > > These are combined in 3-character combinations to produce a "basis" that MetriCal can use, mapping X, Y, and Z axes to each direction respectively. An example is to map positive X in the right direction, positive Y in the down direction, and positive Z in the forward direction. This forms a right-handed orthonormal basis, which we refer to as `"RDF"`. #### Unrecognized Model (cal-cli-006)[​](#cal-cli-006 "Direct link to Unrecognized Model (cal-cli-006)") > 🔍 The provided model is not recognized by MetriCal. > > 🔧 MetriCal maps models to a given topic name dependent on the type of component that the topic's data represents. See the [init documentation](/metrical/commands/init.md#available-models) for more information. #### Component Name Already Exists (cal-cli-007)[​](#cal-cli-007 "Direct link to Component Name Already Exists (cal-cli-007)") > 🔍 Cannot assign component name to a topic; a component already exists in the plex with this name. > > 🔧 Ensure you're not trying to map a single topic to multiple components in the plex. #### File Would Be Overwritten (cal-cli-008)[​](#cal-cli-008 "Direct link to File Would Be Overwritten (cal-cli-008)") > 🔍 The file at the provided path will be overwritten, but the `--overwrite` (short: `-y`) flag was not provided. > > 🔧 Deleting the file at the path can remove this error; however, this may result in data loss. If this file is a plex or object space, this can cause some associations to become lost, like component relations in cached detections. > > If you would like to instead preserve the UUID relations and overwrite the previous plex, pass the corresponding `--overwrite` flag (`-y`) for the specified command. ### Code 3: Shape Errors[​](#code-3-shape-errors "Direct link to Code 3: Shape Errors") #### No Valid Tabular Components (cal-shape-001)[​](#cal-shape-001 "Direct link to No Valid Tabular Components (cal-shape-001)") > 🔍 None of the components passed to the Shape::tabular command exist, so there was nothing to do. > > 🔧 Ensure that you passed in the correct plex, and double-check that the component UUIDs and names that you passed into the shape command do not have a typo. #### Component Cannot Make LUT (cal-shape-002)[​](#cal-shape-002 "Direct link to Component Cannot Make LUT (cal-shape-002)") > 🔍 The specified component is not a camera, so MetriCal can't produce LUTs. > > 🔧 Look Up Tables (LUTs) are one way to accelerate the computation of distortion and rectification for the purposes of image projection and un-projection. Non-Camera components do not have an equivalent set of distortion parameters that can be used to generate a LUT. Make sure to double-check that the component you specified is a camera. #### Bad Stereo Configuration (cal-shape-003)[​](#cal-shape-003 "Direct link to Bad Stereo Configuration (cal-shape-003)") > 🔍 Specified components can't be a stereo pair, since there is no spatial constraint between them in the provided plex. > > 🔧 In order for a stereo pair to be produced via the shape subcommand, both components need to have a spatial constraint between them. 
Make sure that you have run the calibrate command over the plex and that there exists some chain of spatial constraints between the two components. You can reference a component either by its name or UUID. ### Code 4: Init Errors[​](#code-4-init-errors "Direct link to Code 4: Init Errors") #### Cannot Create Camera (cal-init-001)[​](#cal-init-001 "Direct link to Cannot Create Camera (cal-init-001)") > 🔍 Could not create a camera from the provided data for the specified component. > > 🔧 Verify that the data format matches what's expected for camera components and that all required information is present in the dataset. #### Cannot Create Lidar (cal-init-002)[​](#cal-init-002 "Direct link to Cannot Create Lidar (cal-init-002)") > 🔍 Could not create a lidar from the provided data for the specified component. > > 🔧 Verify that the data format matches what's expected for lidar components and that all required information is present in the dataset. #### Cannot Create IMU (cal-init-003)[​](#cal-init-003 "Direct link to Cannot Create IMU (cal-init-003)") > 🔍 Could not create an IMU from the provided data for the specified component. > > 🔧 Verify that the data format matches what's expected for IMU components and that all required information is present in the dataset. #### Impossible Image Resolution (cal-init-004)[​](#cal-init-004 "Direct link to Impossible Image Resolution (cal-init-004)") > 🔍 Last read image for topic has a width or height of 0 px. > > 🔧 Make sure that your data has been written to the specified data format correctly and with the correct size and / or resolution. #### Inconsistent Image Resolution (cal-init-005)[​](#cal-init-005 "Direct link to Inconsistent Image Resolution (cal-init-005)") > 🔍 Last read image for topic has a resolution inconsistent with previous images. > > 🔧 Make sure that your data has been written to the specified data format correctly and with the correct size and / or resolution. #### No Topics Found (cal-init-006)[​](#cal-init-006 "Direct link to No Topics Found (cal-init-006)") > 🔍 None of the requested topics were found or readable in this dataset. > > 🔧 Check that the dataset contains the topics requested and that they are in a readable format. #### Plex Already Exists (cal-init-007)[​](#cal-init-007 "Direct link to Plex Already Exists (cal-init-007)") > 🔍 Previous Plex already exists at path. > > 🔧 Deleting this previous plex will trigger a fresh run of Init mode, but will also reset your UUIDs for each topic. This can cause some associations to become lost, like component relations in cached detections. If you would like to instead preserve the UUID relations and overwrite the previous plex, pass the `--overwrite` flag (`-y`) to Init mode. #### No Data To Initialize (cal-init-008)[​](#cal-init-008 "Direct link to No Data To Initialize (cal-init-008)") > 🔍 MetriCal was not able to create a Plex from the init data. > > 🔧 This error could be for a few reasons: > > * None of the topics requested exist in the dataset provided. > * The dataset provided is empty. > * The requested topics in the dataset provided are not in a format that MetriCal can read. > > If any of these apply, please check the dataset and try again. ### Code 6: License Errors[​](#code-6-license-errors "Direct link to Code 6: License Errors") #### Response Verification Failed (license-001)[​](#license-001 "Direct link to Response Verification Failed (license-001)") > 🔍 Verifying the license via API failed. > > 🔧 Ensure no network proxy is modifying HTTP content or headers. 
#### License File Verification Failed (license-002)[​](#license-002 "Direct link to License File Verification Failed (license-002)") > 🔍 Verifying the license-cache file failed. > > 🔧 Ensure a valid license-cache file exists and has not been edited or corrupted. See the [licensing setup instructions](/metrical/configuration/license_usage.md#using-a-license-key-offline) to enable offline licensing. #### Could Not Read License File (license-004)[​](#license-004 "Direct link to Could Not Read License File (license-004)") > 🔍 A license-cache file could not be read from disk. > > 🔧 Ensure a license-cache file exists and is readable. See the [licensing setup instructions](/metrical/configuration/license_usage.md#using-a-license-key-offline) to enable offline licensing. #### Could Not Deserialize License File (license-005)[​](#license-005 "Direct link to Could Not Deserialize License File (license-005)") > 🔍 The license-cache file could not be parsed. > > 🔧 Ensure a valid license-cache file exists and has not been edited or corrupted. See the [licensing setup instructions](/metrical/configuration/license_usage.md#using-a-license-key-offline) to enable offline licensing. #### No Default License File Path (license-006)[​](#license-006 "Direct link to No Default License File Path (license-006)") > 🔍 No default application config path could be found in order to locate or write the license-cache file. > > 🔧 This is likely an issue with Docker configuration causing the HOME environment variable to be unset inside the container. Please review and follow the [licensing setup instructions](/metrical/configuration/license_usage.md#using-a-license-key-offline) to enable offline licensing. If the issue persists, contact us. #### Could Not Write License File (license-007)[​](#license-007 "Direct link to Could Not Write License File (license-007)") > 🔍 The license-cache file could not be written to disk. > > 🔧 Offline operation is not possible without a license-cache file. Ensure that the metrical-license-cache volume is being successfully mounted into the Docker container. Please review and follow the [licensing setup instructions](/metrical/configuration/license_usage.md#using-a-license-key-offline) to enable offline licensing. If the issue persists, contact us. #### Could Not Serialize License File (license-008)[​](#license-008 "Direct link to Could Not Serialize License File (license-008)") > 🔍 The license-cache file could not be serialized to disk. > > 🔧 Offline operation is not possible without a license-cache file. Please contact us. #### Cached License Expired (license-009)[​](#license-009 "Direct link to Cached License Expired (license-009)") > 🔍 The license-cache file is expired. > > 🔧 Run MetriCal with internet access to refresh the license. You must run a command/sub-command (other than `help`) in order to refresh the license. #### Cached License Key Mismatch (license-011)[​](#license-011 "Direct link to Cached License Key Mismatch (license-011)") > 🔍 The provided license key does not match the key in the license-cache file. > > 🔧 If you maintain different license-cache files (e.g. for different users), ensure you're using the license-cache file that matches the provided license key. Otherwise, delete your license-cache file and try again with an active internet connection to regenerate the license-cache file. #### Config Missing License (license-012)[​](#license-012 "Direct link to Config Missing License (license-012)") > 🔍 Failed to find license key in TOML config file. 
> > 🔧 No item named "license" found in TOML config file. Please ensure there is a top-level key named "license" in the TOML config file. Please reference the [licensing setup instructions](/metrical/configuration/license_usage.md#3-config-file) for an example of how the TOML config file should look. #### Config Malformed License (license-013)[​](#license-013 "Direct link to Config Malformed License (license-013)") > 🔍 License key in TOML config file is invalid. > > 🔧 License key found in TOML config file is malformed (i.e. it is not a string or it has a length of zero). Ensure the license key in the TOML config file is a non-empty string. Please reference the [licensing setup instructions](/metrical/configuration/license_usage.md#3-config-file) for an example of how the TOML config file should look. #### Could Not Read Config File (license-014)[​](#license-014 "Direct link to Could Not Read Config File (license-014)") > 🔍 Failed to read TOML config file. > > 🔧 Ensure the file exists and is readable, or provide the license key via an alternative source (CLI argument, environment variable). #### License Key Not Found (license-015)[​](#license-015 "Direct link to License Key Not Found (license-015)") > 🔍 No license key was found from any source. > > 🔧 Provide a license key via CLI argument, environment variable, or config file. Please reference the [licensing setup instructions](/metrical/configuration/license_usage.md) for examples of how to use a license key with MetriCal. #### No Default Config File Path (license-016)[​](#license-016 "Direct link to No Default Config File Path (license-016)") > 🔍 No default application config path could be found in order to locate the config file. > > 🔧 This is likely an issue with Docker configuration causing the HOME environment variable to be unset inside the container. Please see the [licensing setup instructions](/metrical/configuration/license_usage.md#using-a-config-file-in-docker) for an example of docker commands and options to use. If the issue persists, contact us. #### Config Malformed TOML (license-017)[​](#license-017 "Direct link to Config Malformed TOML (license-017)") > 🔍 Config file is not valid TOML. > > 🔧 Ensure the config file is valid TOML. Please reference the [licensing setup instructions](/metrical/configuration/license_usage.md#3-config-file) for an example of how the TOML config file should look. #### License API Error (license-018)[​](#license-018 "Direct link to License API Error (license-018)") > 🔍 An error occurred while querying the Hub API for license information. > > 🔧 This could be due to network issues, API changes, or server-side problems. Please check your network connection and ensure you're using the latest version of MetriCal. If the issue persists, contact us with the specific error message. ### Code 8: Calibrate Errors[​](#code-8-calibrate-errors "Direct link to Code 8: Calibrate Errors") #### No Features Detected (cal-calibrate-001)[​](#cal-calibrate-001 "Direct link to No Features Detected (cal-calibrate-001)") > 🔍 No features were detected from any specified object space, or the feature count per observation was too low for use. > > 🔧 This indicates an issue in your object space. Double check: > > * The measurements of your fiducials. These should be in meters. > * The dictionary used for your boards, if applicable. For example, a 4x4 dictionary has targets made up of 4x4 squares in the inner pattern of the fiducial. 
> * If this is a Markerboard, make sure your `initial_corner` is set to the correct variant ('Marker' or 'Square'). > * Make sure you can see as much of the board as possible when collecting your data. Tangram filters detections that are all collinear, or detections which have fewer than 8 identified points in the image. > * Try adjusting your Camera or LiDAR filter settings to ensure that all detections are not filtered out. #### Detector Error (cal-calibrate-002)[​](#cal-calibrate-002 "Direct link to Detector Error (cal-calibrate-002)") > 🔍 There was an error when working with the detector. > > 🔧 This error is often unrecoverable, or a result of a misconfigured object-space file. Contact us if you need assistance in crafting the correct object-space file, or making modifications to an existing one. #### Detector-Descriptor Error (cal-calibrate-003)[​](#cal-calibrate-003 "Direct link to Detector-Descriptor Error (cal-calibrate-003)") > 🔍 There was an error when working with the detector-descriptor. > > 🔧 This error is often unrecoverable, or a result of a misconfigured object-space file. Contact us if you need assistance in crafting the correct object-space file, or making modifications to an existing one. #### Calibration Solver Error (cal-calibrate-004)[​](#cal-calibrate-004 "Direct link to Calibration Solver Error (cal-calibrate-004)") > 🔍 Failed to run calibration to completion. > > 🔧 There was an error in the calibration solver. This error is unrecoverable, and should not occur outside of beta or release-candidate (rc) builds of MetriCal. Contact us if this error can be repeatably triggered or continues to persist. #### Calibration Interpretation Failure (cal-calibrate-005)[​](#cal-calibrate-005 "Direct link to Calibration Interpretation Failure (cal-calibrate-005)") > 🔍 Failed to interpret the calibration solution. > > 🔧 There was an error in the calibration solver. This error is unrecoverable, and should not occur outside of beta or release-candidate (rc) builds of MetriCal. Contact us if this error can be repeatably triggered or continues to persist. #### Too Many Component Observations Filtered (cal-calibrate-006)[​](#cal-calibrate-006 "Direct link to Too Many Component Observations Filtered (cal-calibrate-006)") > 🔍 Too many observations from the plex were filtered out by the motion filter. > > 🔧 There was too much motion in this dataset for this component. All observations were filtered out. The motion filter will get rid of observations with detection changes over a certain metric threshold. This ensures that only data without motion artifacts gets used in a calibration. > > * Before disabling the motion filter entirely (`--disable-motion-filter`), we recommend modifying your data capture process to incorporate pauses in motion every second or two. > * When calibrating IMU (which requires motion), raise the camera's motion filter threshold. #### No Compatible Component Type to Register Against (cal-calibrate-007)[​](#cal-calibrate-007 "Direct link to No Compatible Component Type to Register Against (cal-calibrate-007)") > 🔍 There is no compatible component type to register against, so the calibration can't proceed. > > 🔧 Different component types rely on different modalities to support their calibration. > > * Cameras: no dependency > * Lidar: requires another lidar or a camera > * IMU: requires a camera > * Local Navigation System (LNS): requires a camera > > If processing any of these modalities, make sure these conditions are met before proceeding. 
Alternatively, remove this component from the calibration process. #### Compatible Component Type Has No Detections (cal-calibrate-008)[​](#cal-calibrate-008 "Direct link to Compatible Component Type Has No Detections (cal-calibrate-008)") > 🔍 This data had a compatible component type to register against, but that component type had no detections. > > 🔧 Different component types rely on different modalities to support their calibration. > > * Cameras: no dependency > * Lidar: requires another lidar or a camera > * IMU: requires a camera > * Local Navigation System (LNS): requires a camera > > One or more components could be paired with a compatible component type, but that other component type didn't have any usable detections! Gather more detections to calibrate this component. #### Compatible Component Type Has All Detections Filtered (cal-calibrate-009)[​](#cal-calibrate-009 "Direct link to Compatible Component Type Has All Detections Filtered (cal-calibrate-009)") > 🔍 This data had a compatible component type to register against, but all observations from that component type were filtered out due to motion. > > 🔧 Different component types rely on different modalities to support their calibration. > > * Cameras: no dependency > * Lidar: requires another lidar or a camera > * IMU: requires a camera > * Local Navigation System (LNS): requires a camera > > One or more components could be paired with a compatible component type, but that other component type had all of its detections filtered out by the motion filter! We recommend taking another dataset, this time with pauses during motion. That, or raise the motion filter threshold. #### Camera Pose Estimate Optimization Failed (cal-calibrate-010)[​](#cal-calibrate-010 "Direct link to Camera Pose Estimate Optimization Failed (cal-calibrate-010)") > 🔍 The initial camera pose optimization for a component failed to converge. > > 🔧 According to the optimization, there were camera detections for which a sane camera pose could not be derived. This may indicate an error in image readout, or an egregiously poor intrinsics initialization by MetriCal. Try two things: > > * Increase this camera's focal length in the init plex. This might just do the trick. > * Review this camera's data for anything unusual. Any corrupt images or bad detections could be causing this issue. #### Could Not Report Successful Calibration (cal-calibrate-011)[​](#cal-calibrate-011 "Direct link to Could Not Report Successful Calibration (cal-calibrate-011)") > 🔍 MetriCal was unable to report a successful calibration to the licensing server. > > 🔧 Because your license tier has metered calibrations, MetriCal needs to contact the license server to register each successful calibration. Even though your calibration succeeded, we were unable to reach the license server at this time and therefore cannot output calibration results. ### Code 10: Rendering Errors[​](#code-10-rendering-errors "Direct link to Code 10: Rendering Errors") #### Missing Recording Stream (cal-rendering-001)[​](#cal-rendering-001 "Direct link to Missing Recording Stream (cal-rendering-001)") > 🔍 No global Rerun connection found. > > 🔧 Ensure that a global Rerun connection is established before calling this function by calling `set_global_recording_stream`. #### Log Failure (cal-rendering-002)[​](#cal-rendering-002 "Direct link to Log Failure (cal-rendering-002)") > 🔍 Could not log to Rerun. > > 🔧 This indicates an issue with the Rerun logging system. 
Check that Rerun is properly initialized and that the connection hasn't been interrupted. #### Observation Incompatible (cal-rendering-003)[​](#cal-rendering-003 "Direct link to Observation Incompatible (cal-rendering-003)") > 🔍 The observation wasn't compatible with the render function called. > > 🔧 Check that you're using the correct render function for the type of observation you're working with. Each render function expects a specific observation type. #### Missing Intrinsics (cal-rendering-004)[​](#cal-rendering-004 "Direct link to Missing Intrinsics (cal-rendering-004)") > 🔍 There were no intrinsics assigned to the image to be rendered. > > 🔧 All images should be pre-corrected before passing them to the render function. Make sure you've assigned the necessary intrinsic parameters to the image. #### Empty Object Space (cal-rendering-005)[​](#cal-rendering-005 "Direct link to Empty Object Space (cal-rendering-005)") > 🔍 This object space is empty of renderable objects. > > 🔧 The object space you're trying to render doesn't contain any objects that can be displayed. Check your object space configuration to ensure it contains valid renderable elements. #### Missing Component (cal-rendering-006)[​](#cal-rendering-006 "Direct link to Missing Component (cal-rendering-006)") > 🔍 There is no component with the specified name in the plex. > > 🔧 Verify that the component name you're trying to render exists in your plex configuration. Check for typos or make sure the component has been properly initialized in the plex. #### Recording Stream Error (cal-rendering-007)[​](#cal-rendering-007 "Direct link to Recording Stream Error (cal-rendering-007)") > 🔍 An error occurred in the Rerun recording stream. > > 🔧 This indicates an underlying issue with the Rerun system. Check that Rerun is properly installed, running, and that your MetriCal Docker container has access to it. #### Image Conversion Error (cal-rendering-008)[​](#cal-rendering-008 "Direct link to Image Conversion Error (cal-rendering-008)") > 🔍 An error occurred during image conversion for rendering. > > 🔧 There was a problem converting or processing an image for rendering in Rerun. Check that your image formats are supported and that the image data is valid. #### Object Space Construction Error (cal-rendering-009)[​](#cal-rendering-009 "Direct link to Object Space Construction Error (cal-rendering-009)") > 🔍 There was an error constructing the object space for rendering. > > 🔧 This error indicates a problem with the object space definition when preparing it for visualization. Check your object space configuration for any issues with the geometry or structure. ### Code 12: Diagnostic Errors[​](#code-12-diagnostic-errors "Direct link to Code 12: Diagnostic Errors") 🔍 Error-level data diagnostic detected; calibration failed. 🔧 Since an Error-level data diagnostic was thrown, MetriCal will not output a calibration results file by default. If your license plan includes a limited number of calibrations, runs that do not produce a calibration results file will not count toward that limit. This safeguard can be overridden by passing the [`--override-diagnostics`](/metrical/commands/calibrate.md#--override-diagnostics) flag to the `calibrate` command. This will allow you to get a calibration if you feel your data has been wrongly diagnosed as insufficient. However, this will count toward your calibration limit. --- # MetriCal Commands MetriCal is a powerful program with many different tools to help you calibrate your system. 
Each one is designed for a specific purpose, and can be used in conjunction with other commands to achieve your desired calibration results. | Command | Description | | ---------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- | | [Init](/metrical/commands/init.md) | Profile a dataset for calibration. Create or modify a Plex based on this data | | [Calibrate](/metrical/commands/calibrate.md) | Calibrate a dataset given an [Init](/metrical/commands/init.md) plex and object space configuration. | | [Report](/metrical/commands/report.md) | Generate the report from a plex or the output file of a calibration. | | [Display](/metrical/commands/display.md) | Apply the calibrated plex to a dataset and render the results | | [Shape](/metrical/commands/shape/shape_overview.md) | Modify an input plex into any number of different helpful output formats | | [Completion](/metrical/commands/completion.md) | Generate autocomplete files for common shell variants | | [Consolidate Object Spaces](/metrical/commands/consolidate_object_spaces.md) | Consolidate compatible markers and dot markers into a single object space | | [Pipeline](/metrical/commands/pipeline.md) | Create a pipeline of MetriCal commands in JSON (rather than shell scripting) | ## Universal Options[​](#universal-options "Direct link to Universal Options") Most commands in MetriCal have their own options and settings. However, there are a few common options that are applicable to all commands. These options can be placed in any command, either before or after the command you'd like to use. #### --license \[LICENSE][​](#--license-license "Direct link to --license \[LICENSE]") > The license key for MetriCal. > > There are three ways to pass in a license key: > > * Set it via the command line argument `--license` > * Set the `TANGRAM_VISION_LICENSE` environment variable > * Locate it in the Tangram config TOML, typically located at `$HOME/.config/tangram-vision/config.toml` > > License keys are checked and validated in this order. #### --report-path \[REPORT\_PATH][​](#--report-path-report_path "Direct link to --report-path \[REPORT_PATH]") > The path to save the TUI output to. MetriCal can write the TUI output of any command to an HTML file at this path. Alternatively, you can redirect stderr, though this will subsume the interactive output. #### -V, --version[​](#-v---version "Direct link to -V, --version") > Print the current version of MetriCal #### -h, --help[​](#-h---help "Direct link to -h, --help") > Print the help message for this command or function. -h prints a summary of the documentation presented by --help. #### -vv, -v, -q, -qq, -qqq[​](#-vv--v--q--qq--qqq "Direct link to -vv, -v, -q, -qq, -qqq") > The logging verbosity for MetriCal. > > MetriCal uses the [log](https://docs.rs/log/latest/log/) crate to produce logs at various levels of priority. Users can set the log level by passing verbosity flags to any MetriCal command: > > * `-vv`: Trace > * `-v`: Debug > * default: Info > * `-q`: Warn > * `-qq`: Error > * `-qqq`: Off > > Some outputs will change slightly depending on the verbosity level to avoid spamming your console. #### -z, --topic-to-observation-basis \[topic\_name:observation\_basis][​](#-z---topic-to-observation-basis-topic_name "Direct link to -z---topic-to-observation-basis-topic_name") > A mapping of topic/folder names to their respective observation coordinate bases. 
> > MetriCal natively computes extrinsics as a transformation between observations. This can sometimes lead to confusion, as extrinsics won't "match" your preferred convention or coordinate system (even though they're valid!). > > Providing an observation coordinate basis for a topic will allow MetriCal to modify transformations between components to match the conventions found in your system. > > Example: The topic "lidar\_1" streams data in FLU \[x: forward, y: left, > z: up], while the topic "lidar\_2" uses FRD. Designate these components as such: > > ``` > -z lidar_1:FLU -z lidar_2:FRD > ``` > > One may also use wildcards to designate many topics at a time with the same observation basis: > > ``` > -z /lidar/*:FLU > ``` #### -Z, --topic-to-component-basis \[topic\_name:component\_basis][​](#-z---topic-to-component-basis-topic_name "Direct link to -z---topic-to-component-basis-topic_name") > A mapping of topic/folder names to their respective transform coordinate bases. > > MetriCal natively computes extrinsics as a transformation between observations. This can sometimes lead to confusion, as extrinsics won't "match" your preferred convention or coordinate system (even though they're valid!). > > Providing a transform coordinate basis for a topic will allow MetriCal to modify transformations between components to match the conventions found in your system. > > Example: The topic "lidar\_1" streams pointcloud data in FLU \[x: > forward, y: left, z: up], while the topic "lidar\_2" uses FRD. However, we desire coordinate bases to be FRD for both lidar. Designate these components as such: > > ``` > -z lidar_1:FLU -z lidar_2:FRD -Z *lidar*:FRD > ``` --- # Completion Mode ## Purpose[​](#purpose "Direct link to Purpose") * Generate handy completions for common shell varieties (bash, fish, elvish, powershell, zsh). ## Usage[​](#usage "Direct link to Usage") ``` metrical completion [OPTIONS] --shell ``` ## Concepts[​](#concepts "Direct link to Concepts") This command is purely for supplementing your MetriCal experience with automatic completions from the terminal. Since it's common to invoke MetriCal using a bash function or alias, use the `--invocation` argument to specify the string with which you'll invoke MetriCal to get the right completions. ### Sourcing the Completion File[​](#sourcing-the-completion-file "Direct link to Sourcing the Completion File") Don't forget to `source` the appropriate completion file before attempting to use completion! For a temporary solution, just run `source`: ``` source ``` Otherwise, save the file in a canonical location to automatically load it during any shell session. Some locations are listed below; note that this list is not exhaustive, and may change depending on your configuration. | Shell | Location (example for `metrical` alias) | | ---------- | ------------------------------------------------------ | | bash | `/usr/share/bash-completion/completions/metrical.bash` | | zsh | `~/.zsh/_metrical` | | fish | `~/.config/fish/completions/metrical.fish` | | elvish | `~/.elvish/metrical.elv` | | powershell | execute `_metrical.ps1` script in PowerShell | ## Examples[​](#examples "Direct link to Examples") #### Generate Completions for Bash[​](#generate-completions-for-bash "Direct link to Generate Completions for Bash") > ``` > metrical completion -i metrical -s bash > $OUTPUT > ``` > > ...where `$OUTPUT` is the file to pipe the output into. 
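#### Generate and Install Completions for Zsh

> As a further sketch (the destination path below is just the canonical zsh location from the table above; adjust it to your own setup), you can write the completions straight to a file that zsh will load automatically, provided `~/.zsh` is on your `fpath`:
>
> ```
> metrical completion -i metrical -s zsh > ~/.zsh/_metrical
> ```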
## Options[​](#options "Direct link to Options") #### -i, --invocation \[INVOCATION][​](#-i---invocation-invocation "Direct link to -i, --invocation \[INVOCATION]") > The binary name being invoked, e.g. `metrical`. The default value is the 0th argument of the current invocation. This is useful if you're using an alias or function to invoke MetriCal. #### -s, --shell \[SHELL][​](#-s---shell-shell "Direct link to -s, --shell \[SHELL]") > The shell variant to generate. Valid values: > > * `bash`: [bash](https://www.gnu.org/software/bash/manual/bash.html) completion file. > * `zsh`: [zsh](https://www.zsh.org/) completion file. > * `fish`: [Fish](https://fishshell.com/docs/current/index.html) completion file. > * `elvish`: [Elvish](https://elv.sh/ref/) completion file. > * `powershell`: [Powershell](https://learn.microsoft.com/en-us/powershell/) completion file. --- # Consolidate Object Spaces Mode ## Purpose[​](#purpose "Direct link to Purpose") * Combine multiple object spaces from a calibration result into a single, unified object space. * Process object space transformations using object relative extrinsics (OREs) to create a consolidated target configuration. Dot Markers and Square Markers Only This command only works with [dot markers](/metrical/targets/target_overview.md#dotmarkers) and [square markers](/metrical/targets/target_overview.md#squaremarkers). ## Usage[​](#usage "Direct link to Usage") ``` metrical consolidate-object-spaces [OPTIONS] ``` ## Concepts[​](#concepts "Direct link to Concepts") The Consolidate Object Spaces command takes a results file or an object space file that contains object relative extrinsics (OREs) and uses those transformations to combine all object spaces into a single, unified object space. This command is particularly useful for calibration scenarios with minimal sensor or target overlap. In such cases, users would typically: 1. Run an initial "survey" calibration to determine object relative extrinsics 2. Use this command to consolidate the results into a single object space 3. Run a full calibration using the consolidated object space 4. Use this reference in subsequent calibrations to ensure consistency By consolidating object spaces, you can effectively combine calibration data from multiple scenes or captures, even when there isn't significant overlap between all sensors. 
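To make the workflow above concrete, here is a minimal sketch of the survey-then-consolidate-then-calibrate loop. The file names (`survey_results.json`, `consolidated.json`, `final_results.json`) are placeholders, not required names:

```
# 1. Survey calibration to estimate object relative extrinsics (OREs)
metrical calibrate -o survey_results.json $DATA $PLEX $OBJ_SPC

# 2. Consolidate all surveyed object spaces into a single object space
metrical consolidate-object-spaces -o consolidated.json survey_results.json

# 3. Re-run the full calibration against the consolidated object space
metrical calibrate -o final_results.json $DATA $PLEX consolidated.json
```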
## Examples[​](#examples "Direct link to Examples") #### Consolidate object spaces from a calibration result[​](#consolidate-object-spaces-from-a-calibration-result "Direct link to Consolidate object spaces from a calibration result") > ``` > metrical consolidate-object-spaces \ > --output-path consolidated.json \ > results.json > ``` #### Use the consolidated object space in a subsequent calibration[​](#use-the-consolidated-object-space-in-a-subsequent-calibration "Direct link to Use the consolidated object space in a subsequent calibration") > After creating a consolidated object space, you can use it in your calibration: > > ``` > metrical calibrate \ > --output-json final_results.json \ > $DATA $PLEX consolidated.json > ``` #### Force overwrite of an existing consolidated object space file[​](#force-overwrite-of-an-existing-consolidated-object-space-file "Direct link to Force overwrite of an existing consolidated object space file") > ``` > metrical consolidate-object-spaces \ > --overwrite-consolidated-objects \ > --output-path consolidated.json \ > results.json > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[OBJECT\_SPACE\_OR\_RESULTS\_PATH][​](#object_space_or_results_path "Direct link to \[OBJECT_SPACE_OR_RESULTS_PATH]") > A path pointing to a description of the object space or a MetriCal results JSON file. For this command to work effectively, the input file should contain object relative extrinsics that relate multiple object spaces together. ## Options[​](#options "Direct link to Options") #### Universal Options[​](#universal-options "Direct link to Universal Options") As with every command, all [universal options](/metrical/commands/commands_overview.md#universal-options) are supported (though not all may be used). #### -o, --consolidated-object-space-path \[CONSOLIDATED\_OBJECT\_SPACE\_PATH][​](#-o---consolidated-object-space-path-consolidated_object_space_path "Direct link to -o, --consolidated-object-space-path \[CONSOLIDATED_OBJECT_SPACE_PATH]") > **Default: consolidated\_objects.json** > > The output path to save the consolidated object space, in JSON format. If a directory is provided, the file will be created as "consolidated\_objects.json" within that directory. #### -y, --overwrite-consolidated-objects[​](#-y---overwrite-consolidated-objects "Direct link to -y, --overwrite-consolidated-objects") > **Default: false** > > Overwrite an existing consolidated object space file if it exists at the specified location. If this flag is not set and the output file already exists, the operation will fail with an error. --- # Display Mode Rerun Required This is the only command that expects an active Rerun server. Make sure to have Rerun running before using this command. Read about configuring Rerun [here](/metrical/configuration/visualization.md). ## Purpose[​](#purpose "Direct link to Purpose") * Display the applied results of a calibration in Rerun. ## Usage[​](#usage "Direct link to Usage") ``` metrical display [OPTIONS] $INPUT_DATA_PATH $PLEX_OR_RESULTS_PATH ``` ## Concepts[​](#concepts "Direct link to Concepts") MetriCal's Display command takes the results of a calibration, applies them to a dataset, and displays the applied data in Rerun. This is useful for a quick gut check of the calibration quality; the classic "ocular validation", if you will. That being said, there's no reason you can't apply a calibration to a test dataset, i.e. one that wasn't used to derive a calibration. Just make sure that the dataset shares the same topics as the plex. 
If this is not the case, you can use the [`--topic-to-component`](#-m---topic-to-component-topic-to-component) flag to map topics to components. Watch the video below for a quick primer on Display command: ## Examples[​](#examples "Direct link to Examples") #### Start a Display session immediately after a calibration run[​](#start-a-display-session-immediately-after-a-calibration-run "Direct link to Start a Display session immediately after a calibration run") > ``` > metrical calibrate -o $RESULTS $DATA $PLEX $OBJ_SPC > metrical display $DATA $RESULTS > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[INPUT\_DATA\_PATH][​](#input_data_path "Direct link to \[INPUT_DATA_PATH]") > Input data path for this calibration. > > MetriCal accepts a few data formats: > > * Ros1 bags, in the form of a `.bag` file. > * MCAP files, with an `.mcap` extension. > * Folders, as a flat directory of data. The folder passed to MetriCal should itself hold one folder of data for every component. > > In all cases, the topic/folder name must match a named component in the plex in order to be matched correctly. If this is not the case, there's no need to edit the plex directly; instead, one may use the [`--topic-to-component`](#-m---topic-to-component-topic-to-component) flag. #### \[PLEX\_OR\_RESULTS\_PATH][​](#plex_or_results_path "Direct link to \[PLEX_OR_RESULTS_PATH]") > The path to the input plex. This can be a MetriCal results JSON or a plex JSON. ## Options[​](#options "Direct link to Options") #### Universal Options[​](#universal-options "Direct link to Universal Options") As with every command, all [universal options](/metrical/commands/commands_overview.md#universal-options) are supported (though not all may be used). #### -m, --topic-to-component \[TOPIC-TO-COMPONENT][​](#-m---topic-to-component-topic-to-component "Direct link to -m, --topic-to-component \[TOPIC-TO-COMPONENT]") > A mapping of ROS topic/folder names to component names/UUIDs in the input plex. > MetriCal only parses data that has a topic-component mapping. Ideally, topics and components share the same name. However, if this is not the case, use this flag to map topic names from the dataset to component names in the plex. #### --render-socket \[RENDER\_SOCKET][​](#--render-socket-render_socket "Direct link to --render-socket \[RENDER_SOCKET]") > The web socket address on which Rerun is listening. This should be an IP address and port number separated by a colon, e.g. `--render-socket="127.0.0.1:3030"`. By default, Rerun will listen on socket `host.docker.internal:9876`. If running locally (not via Docker), Rerun's default port is `127.0.0.1:9876` > > When running Rerun from its CLI, the IP would correspond to its `--bind` option and the port would correspond to its `--port` option. --- # Init Mode ## Purpose[​](#purpose "Direct link to Purpose") * Create an uncalibrated plex from a dataset and other seed plex. ## Usage[​](#usage "Direct link to Usage") ``` metrical init [OPTIONS] --topic-to-model \ ``` ## Concepts[​](#concepts "Direct link to Concepts") Init will infer all components, spatial constraints, and temporal constraints based on the observations and interactions in the dataset. It will then write this information out in plex form to a JSON file (listed as `$INIT_PLEX` in code samples below). This is our Initial Plex! MetriCal will use this plex as the initialization to the [Calibrate](/metrical/commands/calibrate.md) command. 
Let's break down this example Init command: ``` metrical init -m *cam_ir*:opencv_radtan -m *cam_color*:eucm -m *lidar*:lidar $DATA $INIT_PLEX ``` * Assign all topics/folders that match the `*cam_ir*` pattern to the OpenCV Radtan model. * Assign all topics/folders that match the `*cam_color*` pattern to the EUCM model. * Assign all topics/folders that match the `*lidar*` pattern to the No Offset model (just extrinsics) * Use the data in `$DATA` to seed the plex. MetriCal will discard information from any topics that aren't assigned a model using the command line. See all of the models available to MetriCal in the [`--topic-to-model`/`-m` documentation](#available-models). ### Seed an Init Plex[​](#seed-an-init-plex "Direct link to Seed an Init Plex") Say you've already calibrated all of your cameras individually, and have great intrinsics results in another plex. You can use those as a starting point for another plex using Init: ``` metrical init --seed-cal-path $SEED_PLEX $DATA $INIT_PLEX ``` All components, spatial constraints, and temporal constraints that are in the initial plex will be preserved in the output plex, regardless of whether they are in the dataset. If there is a conflict, the output will be modified to a "best guess" value by MetriCal based on the input plex. An example of this would be changing the image size of a camera based on the observations in the dataset, but keeping any existing spatial constraints intact. ### Seed from a URDF File[​](#seed-from-a-urdf-file "Direct link to Seed from a URDF File") In this example, we assume that you wish to seed the plex generated by [Init](/metrical/commands/init.md) with a URDF file. The URDF file might contain extrinsics from a CAD file, or may be provided by ROS from a previous calibration. ``` metrical init ... --seed-cal-path my_urdf.xml $DATA ``` OR ``` metrical init ... -p my_urdf.xml $DATA ``` This flag can be specified more than once, if you have URDF files split across several sub-graphs of components: ``` metrical init ... -p urdf_1.xml -p urdf_2.xml $DATA ``` This can even be mixed with other seed sources: ``` metrical init ... -p urdf_1.xml -p previous_plex.json -p urdf_2.xml $DATA ``` Spatial constraints in the output plex are added from the provided seed URDF if the following conditions are met: 1. The names of the topics in the URDF match one of the components mapped by the `-m topic:model` mapping in the invocation of `metrical init`. 2. There does not exist a higher-ranked spatial constraint among the other seed calibration sources. To expand on that latter point, `--seed-cal-path` can be provided a metrical results JSON, a pre-existing plex, or URDF file. Of these, `metrical init` searches all seed calibration sources (all files passed to `--seed-cal-path`). See the section on the order of operations below for a complete description of how this is performed. ### Overwriting Init Plex[​](#overwriting-init-plex "Direct link to Overwriting Init Plex") By default, MetriCal will prevent you from overwriting a plex if one exists at the `$INIT_PLEX` path. You can tell MetriCal to overwrite this plex by passing the `--overwrite`/`-y` flag. When this happens, MetriCal will use the plex currently located at `$INIT_PLEX` as a seed itself (along with any other seeded plex you have passed in). This preserves UUIDs between runs, information upon which much of MetriCal's caching depends. If you want to overwrite the Init plex without using the original as a seed, you must manually delete the original plex file. 
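As a quick sketch of that overwrite behavior (the topic pattern and model here are illustrative, not required), re-running Init over an existing plex looks like this:

```
# Seed from, then overwrite, the plex already at $INIT_PLEX
metrical init -y -m *cam*:opencv_radtan $DATA $INIT_PLEX
```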
### Init Order of Operations[​](#init-order-of-operations "Direct link to Init Order of Operations") #### ...with no Seeded Plex[​](#with-no-seeded-plex "Direct link to ...with no Seeded Plex") * The Init command should create a new plex with the given data, only using the named devices. * All components should have dummy spatial constraints between them. * All components should have populated temporal constraints between them. #### ...with Seeded Plex[​](#with-seeded-plex "Direct link to ...with Seeded Plex") For all components passed in via `-m`, if the topic is... * IN the dataset + IN the seeded plex: * The plex component intrinsics are updated. UUIDs are preserved. * If the intrinsics model asked for in the CLI is different from that of the seeded plex, it should update the model. UUIDs are preserved. * If a camera model is the same, but the image size is different, it should update the intrinsics to scale to the new image size (if possible). UUIDs are preserved. If not possible, it should throw an error. * If the camera model is the same, and the image size is the same, we should preserve the component as-is from the seeded plex. * IN the dataset + NOT IN the seeded plex: * A new component is created with guessed intrinsics and the right model. * NOT IN the dataset + IN the seeded plex: * All information on that component is copied over as-is. * NOT IN the dataset + NOT IN the seeded plex: - The component and all of its information is disregarded. Any components not passed in with `-m`, but present in the seeded plex, are dropped from the initialized plex. All constraints that connect two preserved components should themselves be preserved. * If a component is dropped, all constraints that involve it are dropped. * Synchronization in temporal constraints is preserved. Resolution is modified based on the input data. #### ...with Multiple Seeded Plex[​](#with-multiple-seeded-plex "Direct link to ...with Multiple Seeded Plex") * Seed plex are ordered by creation timestamp, from newest to oldest * Plex repair data is created according to the "With Seeded Plex" criteria, going from newest seed plex to oldest. The newer the plex, the higher its priority; the newest plex's components and constraints will be used before all others. * If an older plex contains components or constraints that use a totally different UUID and topic, those components or constraints are carried over in their entirety in the init plex. * If an older plex contains components or constraints that use the same topic, but a different UUID, that information is considered the same as what was read in the same topic from a newer plex. In this case, any constraints that include the "old" topic+UUID pair are incorporated into the init plex, but the "old" UUID is converted to match that of the component in the newest plex in which it appeared. * If no spatial constraints were found in the previous step for a given pair of topics, then an appropriate spatial constraint is searched for in each URDF, in the order that each URDF were passed into the command line. URDFs do not contain any kind of standard creation timestamp, and so they cannot be sorted from newest to oldest by default. 
#### Applying the Order of Operations[​](#applying-the-order-of-operations "Direct link to Applying the Order of Operations") | Item | Seed Plex One | Seed Plex Two | Init Plex | | -------------------------- | ----------------- | ----------------- | -------------------------- | | Timestamp | 100 | 200 (newer) | Matches time of writing | | Components (topic + UUID) | A1, B2, and C3 | A4, B5, and C6 | A4, B5, and C6 | | Spatial Constraint | A1 <--> B2 | A4 <--> B5 | A4 <--> B5 | | Temporal Constraint | B2 <--> C3 | A4 <--> B5 | A4 <--> B5, B5 <--> C6 | All information in the init plex came directly from Seed Plex Two, since it was newer. However, the temporal constraint between B and C only existed in Seed Plex One. Since we match topics as well as UUIDs, we took this to mean that Seed Plex Two was missing a temporal constraint; we can go ahead and move that to the new init plex. ## Examples[​](#examples "Direct link to Examples") #### Create a plex from input data[​](#create-a-plex-from-input-data "Direct link to Create a plex from input data") Use OpenCV's Brown-Conrady model for all topics whose name starts with the substring `/camera/`. ``` metrical init --topic-to-model /camera/*:opencv_radtan $DATA $INIT_PLEX ``` #### Seed a new plex with previous calibrations[​](#seed-a-new-plex-with-previous-calibrations "Direct link to Seed a new plex with previous calibrations") ``` metrical init --seed-cal-path $SEED_PLEX_ONE \ --seed-cal-path $SEED_PLEX_TWO \ --seed-cal-path ... \ $DATA $INIT_PLEX ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[INPUT\_DATA\_PATH][​](#input_data_path "Direct link to \[INPUT_DATA_PATH]") Input data path for this initialization. MetriCal accepts a few data formats: * Ros1 bags, in the form of a `.bag` file. * Ros2 bags, in the form of a `.mcap` file. * Folders of observations. In all cases, the topic/folder name must match a named component in the plex in order to be matched correctly. If this is not the case, there's no need to edit the plex directly; instead, one may use the `--topic-to-component` flag. #### \[OUTPUT\_JSON][​](#output_json "Direct link to \[OUTPUT_JSON]") The JSON in which to save the final plex. \[default: plex.json] ## Options[​](#options "Direct link to Options") #### Universal Options[​](#universal-options "Direct link to Universal Options") As with every command, all [universal options](/metrical/commands/commands_overview.md#universal-options) are supported (though not all may be used). #### -p, --seed-cal-path \[SEED\_CAL\_PATH][​](#-p---seed-cal-path-seed_cal_path "Direct link to -p, --seed-cal-path \[SEED_CAL_PATH]") > The path to the input plex. This can be a MetriCal results JSON, a plex JSON, or a URDF XML. #### -m, --topic-to-model \[topic\_name:model][​](#-m---topic-to-model-topic_name "Direct link to -m---topic-to-model-topic_name") > A mapping of topic/folder names to models in the input plex. NOTE: All topics intended for calibration *must* be enumerated by this argument. If an initial plex is provided, any matching topic models in the plex will be overwritten. 
> > Example: We would like the topic "camera\_1" to be modeled using OpenCV's regular distortion model (aka Brown Conrady), and "camera\_2" to be modeled using its fisheye model: > > ``` > -m camera_1:opencv_radtan -m camera_2:opencv_fisheye > ``` > > One may also use wildcards to designate many topics at a time: > > ``` > -m /camera/\*:opencv_radtan > ``` > > #### Available models[​](#available-models "Direct link to Available models") > > ##### Cameras[​](#cameras "Direct link to Cameras") > > * [no\_distortion](/metrical/calibration_models/cameras.md#no-distortion) > * [pinhole\_with\_brown\_conrady](/metrical/calibration_models/cameras.md#pinhole-with-inverse-brown-conrady) > * [pinhole\_with\_kannala\_brandt](/metrical/calibration_models/cameras.md#pinhole-with-inverse-kannala-brandt) > * [opencv\_radtan](/metrical/calibration_models/cameras.md#opencv-radtan) > * [opencv\_fisheye](/metrical/calibration_models/cameras.md#opencv-fisheye) > * [opencv\_rational](/metrical/calibration_models/cameras.md#opencv-rational) > * [eucm](/metrical/calibration_models/cameras.md#eucm) > * [double\_sphere](/metrical/calibration_models/cameras.md#double-sphere) > * [omni](/metrical/calibration_models/cameras.md#omnidirectional-omni) > * [power\_law](/metrical/calibration_models/cameras.md#power-law) > > ##### Lidar[​](#lidar "Direct link to Lidar") > > * [lidar](/metrical/calibration_models/lidar.md) > > ##### Imu[​](#imu "Direct link to Imu") > > * [scale](/metrical/calibration_models/imu.md#imu-model-descriptions) > * [scale\_shear](/metrical/calibration_models/imu.md#imu-model-descriptions) > * [scale\_shear\_rotation](/metrical/calibration_models/imu.md#imu-model-descriptions) > * [scale\_shear\_rotation\_g\_sensitivity](/metrical/calibration_models/imu.md#imu-model-descriptions) #### -T, --remap-seed-component \[REMAP\_SEED\_COMPONENT][​](#-t---remap-seed-component-remap_seed_component "Direct link to -T, --remap-seed-component \[REMAP_SEED_COMPONENT]") > Remaps a component from the seed plex to a new topic name in the given dataset. > > Can take a string of either format `old_component_name:new_topic_name` or `old_component_uuid:new_topic_name`. #### -y, --overwrite-plex[​](#-y---overwrite-plex "Direct link to -y, --overwrite-plex") > Overwrite the init plex file at this location, if it exists. --- Pipeline Mode Deprecation Pipeline Mode is deprecated and will be removed in a future release. # Pipeline Mode ## Purpose[​](#purpose "Direct link to Purpose") * Run a series of commands in sequence * Store common pipeline configurations in version control ## Usage[​](#usage "Direct link to Usage") ``` metrical pipeline ``` ## Concepts[​](#concepts "Direct link to Concepts") A pipeline is a series of commands that are run in sequence. These commands are just modes that have been serialized into JSON and stored in a file. This allows you to store common workflows in version control and run them with a single command. 
Here's an example of a pipeline configuration file: ``` { "commands": [ { "init": { "input_data_path": "fixtures/data_as.bag", "output_json": "/tmp/dummy_output_config.json", "plex_path": null, "preset_device": [], "topic_to_model": [ ["*cam*", "opencv_radtan"], ["*lidar*", "lidar"] ] } }, { "calibrate": { "cache_dir": "cached_data", "cache_filtered_data": false, "disable_filter": false, "disable_ore_inference": false, "input_data_path": "fixtures/data_as.bag", "is_interactive": false, "max_iterations": 200, "object_space_path": "fixtures/obj.json", "output_json": "results.json", "plex_path": "fixtures/plex.json", "preserve_input_constraints": false, "render_socket": null, "should_render": false, "stillness_threshold": 1, "topic-to-component": [] } } ], "license": null, "report_path": null } ``` As you can see, it's just the CLI arguments for each command serialized into JSON. And no, you can't have a pipeline for your pipeline; `pipeline` is the only command that isn't serialized. Sorry, recursion fans. Note that commands will be run in the order they are listed in the `commands` array. A failed command will stop the entire pipeline and return the error. ## Examples[​](#examples "Direct link to Examples") #### Run a pipeline[​](#run-a-pipeline "Direct link to Run a pipeline") > ``` > metrical pipeline my_pipeline.json > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[CONFIG][​](#config "Direct link to \[CONFIG]") > The configuration file describing the pipeline. --- # Report Mode ## Purpose[​](#purpose "Direct link to Purpose") * Generate the [report](/metrical/results/report.md) from a plex or the [output file](/metrical/results/output_file.md) of a calibration. ## Usage[​](#usage "Direct link to Usage") ``` metrical report [OPTIONS] ``` ## Concepts[​](#concepts "Direct link to Concepts") The Report command is primarily used to re-generate the [full report](/metrical/results/report.md) from a [calibration output](/metrical/results/output_file.md). It will also print any provided plex file in a human-readable format. If an `origin` and `secondary` component are provided, then MetriCal will extract the relevant component and constraint data between those two items. `Origin` will act as the origin for any constraints. This is useful for a gut-check of a calibration parameter or two. If you need to reformat a plex into something more convenient for your application, use [Shape command](/metrical/commands/shape/shape_overview.md) instead. ## Examples[​](#examples "Direct link to Examples") #### Print a plex in a human-readable format[​](#print-a-plex-in-a-human-readable-format "Direct link to Print a plex in a human-readable format") > ``` > metrical report $PLEX > ``` #### Generate a report using the output file from a calibration[​](#generate-a-report-using-the-output-file-from-a-calibration "Direct link to Generate a report using the output file from a calibration") > ``` > metrical calibrate -o $RESULTS $DATA $INIT_PLEX $OBJ > metrical report $RESULTS > ``` #### Print the constraints between two components in a human-readable format[​](#print-the-constraints-between-two-components-in-a-human-readable-format "Direct link to Print the constraints between two components in a human-readable format") > ``` > metrical report -a ir_one -b ir_two $PLEX > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[PLEX\_OR\_RESULTS\_PATH][​](#plex_or_results_path "Direct link to \[PLEX_OR_RESULTS_PATH]") > The path to the input plex. 
This can be a MetriCal results JSON or a plex JSON. ## Options[​](#options "Direct link to Options") #### Universal Options[​](#universal-options "Direct link to Universal Options") As with every command, all [universal options](/metrical/commands/commands_overview.md#universal-options) are supported (though not all may be used). #### -a, --origin \[ORIGIN][​](#-a---origin-origin "Direct link to -a, --origin \[ORIGIN]") > The origin component, if provided. Must have a secondary component option as well. #### -b, --secondary \[SECONDARY][​](#-b---secondary-secondary "Direct link to -b, --secondary \[SECONDARY]") > The secondary component, if provided. Must have an origin component option as well. --- # Focus ## Purpose[​](#purpose "Direct link to Purpose") * Create a plex in which all components are spatially connected to only one "focus" component, i.e. a "hub-and-spoke" plex. ## Usage[​](#usage "Direct link to Usage") ``` metrical shape focus [OPTIONS] --component ``` ## Concepts[​](#concepts "Direct link to Concepts") This Shape command creates a plex in which all components are spatially connected to only one origin component/object space ID. In other words, one component acts as a "focus", with spatial constraints to all other components. Notice that this command will compose two or more spatial constraints to achieve the optimal "path" between two components in space. ![An illustration of a plex created through Focus command](/assets/images/shape_focus-b2683baa26f761dbdd56f38f4215130c.png) ## Examples[​](#examples "Direct link to Examples") #### Create a plex focused around component `ir_one`[​](#create-a-plex-focused-around-component-ir_one "Direct link to create-a-plex-focused-around-component-ir_one") > ``` > metrical shape focus --component ir_one $RESULTS $OUTPUT_DIR > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[PLEX\_OR\_RESULTS\_PATH][​](#plex_or_results_path "Direct link to \[PLEX_OR_RESULTS_PATH]") > The path to the input plex. This can be a MetriCal results JSON or a plex JSON. #### \[OUTPUT\_DIR][​](#output_dir "Direct link to \[OUTPUT_DIR]") > The output directory for this shaping operation. ## Options[​](#options "Direct link to Options") #### Universal Options[​](#universal-options "Direct link to Universal Options") As with every mode, all [universal options](/metrical/commands/commands_overview.md#universal-options) are supported (though not all may be used). #### -a, --component \[COMPONENT][​](#-a---component-component "Direct link to -a, --component \[COMPONENT]") > The component around which this plex is focused. --- Code Samples For code samples of LUT-based image manipulation, please check out our [lut-examples](https://gitlab.com/tangram-vision/oss/lut-examples) repository. # Lookup Table ## Purpose[​](#purpose "Direct link to Purpose") * Create a pixel-wise lookup table for a single camera. ## Usage[​](#usage "Direct link to Usage") ``` metrical shape lut [OPTIONS] --camera ``` ## Concepts[​](#concepts "Direct link to Concepts") It's not always easy to adopt a new camera model. Sometimes, you just want to apply a correction and not have to worry about getting all of the math right. The LUT subcommand gives you a shortcut: it describes the locations of the mapped pixel values required to apply a calibration to an entire image. Note that a lookup table only describes the correction for an image of the same dimensions as the calibrated image. 
If you're trying to downsample or upsample an image, you'll need to derive a new lookup table for that image dimension. These lookup tables can be used as-is using OpenCV's lookup table routines; see [this open-source repo on applying lookup tables in OpenCV](https://gitlab.com/tangram-vision/oss/lut-examples) for an example. ## Examples[​](#examples "Direct link to Examples") #### Create a correction lookup table for camera `ir_one`[​](#create-a-correction-lookup-table-for-camera-ir_one "Direct link to Create a correction lookup table for camera ir_one") > ``` > metrical shape lut --camera ir_one > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[PLEX\_OR\_RESULTS\_PATH][​](#plex_or_results_path "Direct link to \[PLEX_OR_RESULTS_PATH]") > The path to the input plex. This can be a MetriCal results JSON or a plex JSON. #### \[OUTPUT\_DIR][​](#output_dir "Direct link to \[OUTPUT_DIR]") > The output directory for this shaping operation. ## Options[​](#options "Direct link to Options") #### Universal Options[​](#universal-options "Direct link to Universal Options") As with every mode, all [universal options](/metrical/commands/commands_overview.md#universal-options) are supported (though not all may be used). #### -a, --camera \[CAMERA][​](#-a---camera-camera "Direct link to -a, --camera \[CAMERA]") > The camera to generate a LUT for. Must be a camera component reference such as a UUID or component name. --- Code Samples For code samples of LUT-based image manipulation, please check out our [lut-examples](https://gitlab.com/tangram-vision/oss/lut-examples) repository. # Stereo Lookup Table ## Purpose[​](#purpose "Direct link to Purpose") * Create two pixel-wise lookup tables that produce a rectified stereo pair. ## Usage[​](#usage "Direct link to Usage") ``` metrical shape stereo-lut [OPTIONS] \ --dominant <DOMINANT> \ --secondary <SECONDARY> ``` ## Concepts[​](#concepts "Direct link to Concepts") This command creates two pixel-wise lookup tables to produce a stereo rectified pair based on an existing calibration. Rectification occurs in a hallucinated frame with a translation at the origin of the dominant eye, but with a rotation halfway between the dominant frame and the secondary frame. Rectification is the process of transforming a stereo pair of images into a common plane. This is useful for a ton of different applications, including feature matching and disparity estimation. Rectification is often a prerequisite for other computer vision tasks. The Stereo LUT subcommand will create two lookup tables, one for each camera in the stereo pair. These LUTs are the same format as the ones created by the `lut` command, but they map the pixel-wise shift needed to move each image into that common plane. The result is a pair of rectified images! The Stereo LUT subcommand will also output the values necessary to calculate depth from disparity values. These are included in a separate JSON file in the output directory. ## Examples[​](#examples "Direct link to Examples") #### Create a rectification lookup table between a stereo pair[​](#create-a-rectification-lookup-table-between-a-stereo-pair "Direct link to Create a rectification lookup table between a stereo pair") > ``` > metrical shape stereo-lut --dominant ir_one --secondary ir_two $RESULTS $OUTPUT_DIR > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[PLEX\_OR\_RESULTS\_PATH][​](#plex_or_results_path "Direct link to \[PLEX_OR_RESULTS_PATH]") > The path to the input plex. This can be a MetriCal results JSON or a plex JSON. 
#### \[OUTPUT\_DIR][​](#output_dir "Direct link to \[OUTPUT_DIR]") > The output directory for this shaping operation. ## Options[​](#options "Direct link to Options") #### Universal Options[​](#universal-options "Direct link to Universal Options") As with every mode, all [universal options](/metrical/commands/commands_overview.md#universal-options) are supported (though not all may be used). #### -a, --dominant \[DOMINANT][​](#-a---dominant-dominant "Direct link to -a, --dominant \[DOMINANT]") > The dominant eye in this stereo pair. Must be a camera component specifier (UUID or component name) #### -b, --secondary \[SECONDARY][​](#-b---secondary-secondary "Direct link to -b, --secondary \[SECONDARY]") > The secondary eye in this stereo pair. Must be a camera component specifier (UUID or component name) ## Output Schema[​](#output-schema "Direct link to Output Schema") Loading .... [Raw schema file](/assets/files/stereo_rectifier_schema-d45d570d2465e193b64ae683cd18ad5d.json) --- # Minimum Spanning Tree ## Purpose[​](#purpose "Direct link to Purpose") * Create a plex which only features the Minimum Spanning Tree of all spatial components. ## Usage[​](#usage "Direct link to Usage") ``` metrical shape mst [OPTIONS] ``` ## Concepts[​](#concepts "Direct link to Concepts") As defined by [Wikipedia](https://en.wikipedia.org/wiki/Minimum_spanning_tree), a minimum spanning tree (MST) "is a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices together, without any cycles and with the minimum possible total edge weight". In other words, it's the set of connections with the smallest weight that still connects all of the components together. In the case of a plex's spatial constraints, the "weights" for our MST search are the determinants of the covariance of the extrinsic. This is a good measure of how well-constrained a component is to its neighbors. ![An illustration of a plex created through MST command](/assets/images/shape_mst-a5c8caa4574c3d3d8a4bbd62392ea839.png) ## Examples[​](#examples "Direct link to Examples") #### Create a minimum spanning tree plex[​](#create-a-minimum-spanning-tree-plex "Direct link to Create a minimum spanning tree plex") > ``` > metrical shape mst $RESULTS $OUTPUT_DIR > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[PLEX\_OR\_RESULTS\_PATH][​](#plex_or_results_path "Direct link to \[PLEX_OR_RESULTS_PATH]") > The path to the input plex. This can be a MetriCal results JSON or a plex JSON. #### \[OUTPUT\_DIR][​](#output_dir "Direct link to \[OUTPUT_DIR]") > The output directory for this shaping operation. ## Options[​](#options "Direct link to Options") #### Universal Options[​](#universal-options "Direct link to Universal Options") As with every mode, all [universal options](/metrical/commands/commands_overview.md#universal-options) are supported (though not all may be used). --- # Shape Mode ## Purpose[​](#purpose "Direct link to Purpose") * Modify an input plex into any number of different helpful output formats. ## Usage[​](#usage "Direct link to Usage") ``` metrical shape [COMMAND_OPTIONS] ``` ## Concepts[​](#concepts "Direct link to Concepts") Plexes can become incredibly complicated and difficult to parse depending on the complexity of the system you're calibrating. This is where the Shape command comes in handy. Shape modifies a plex into a variety of different and useful configurations. It was created with an eye towards the practical use of calibration data in a deployed system. 
Some Shape subcommands, like `mst` and `focus`, rely on the [covariance of each spatial constraint](/metrical/core_concepts/constraints.md#spatial-covariance) in the plex to inform the operation. Since the covariance is a measure of uncertainty, we can use it to carve out the "most certain" constraints between two components. ![Covariances of the plex\'s spatial constraints](/assets/images/shape_covariance-ee2d1c8fad41e41cd8fdf8d282c7a0f7.png) Other Shape commands will mutate the plex into something useful for another application, such as the `urdf` command for ROS applications. | Commands | Purpose | | ------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------- | | [`focus`](/metrical/commands/shape/shape_focus.md) | Create a plex in which all components are spatially connected to only one "focus" component, i.e. a "hub-and-spoke" plex. | | [`lut`](/metrical/commands/shape/shape_lut.md) | Create a pixel-wise lookup table for a single camera. | | [`stereo-lut`](/metrical/commands/shape/shape_lut_stereo.md) | Create two pixel-wise lookup tables that produce a rectified stereo pair. | | [`mst`](/metrical/commands/shape/shape_mst.md) | Create a plex from the Minimum Spanning Tree of all spatial constraints. | | [`tabular`](/metrical/commands/shape/shape_tabular.md) | Re-encode the calibration data in a plex as a series of compressed tables. | | [`urdf`](/metrical/commands/shape/shape_urdf.md) | Create a ROS-compatible URDF from this plex. | --- # Tabular ## Purpose[​](#purpose "Direct link to Purpose") * Export the intrinsic and extrinsic data in a plex in a simplified table-based format. ## Usage[​](#usage "Direct link to Usage") ``` metrical shape tabular [OPTIONS] ``` ## Concepts[​](#concepts "Direct link to Concepts") When using MetriCal, the [Plex format](/metrical/core_concepts/plex_overview.md) is preferred because it represents a complete description of the system, including: * Intrinsics as well as their covariance * Spatial constraints that describe Extrinsics as well as their covariance * Temporal constraints that describe the synchronization of various clocks across the system This information is used to configure and initialize calibration, and is leveraged to provide lookups of spatial constraints with the minimum covariance. Other shape modes utilize this information to, for example, produce the [MST](/metrical/commands/shape/shape_mst.md) of the system. The Tabular shape subcommand outputs a subset of the data in a Plex, which contains: 1. The intrinsic information of each component in the source Plex (without covariance). 2. The extrinsic information of each connected component pairing in the source Plex (without covariance). At present, the intrinsic information for camera components also includes the serialized lookup tables (LUTs) similar to what is output by the [LUT](/metrical/commands/shape/shape_lut.md) mode, but for every camera in the source Plex rather than just a single camera. The purpose of the Tabular subcommand is to provide a simplified description of the calibration and related artefacts (such as the LUTs) that is more suitable for flashing onto devices that may not need the full system description. 
Generally speaking, the output JSON (or MessagePack binary) will have fields akin to the following: ``` { "first_component_name": { "raw_intrinsics": { // intrinsics model as encoded in the plex }, "lut": { "width": 800, "height": 700, "remapping_columns": [ /* an 800×700 LUT in row-major order */ ], "remapping_rows": [ /* an 800×700 LUT in row-major order */ ] } }, "second_component_name": { "raw_intrinsics": { // intrinsics model as encoded in the plex }, "lut": { "width": 800, "height": 700, "remapping_columns": [ /* an 800×700 LUT in row-major order */ ], "remapping_rows": [ /* an 800×700 LUT in row-major order */ ] } }, "extrinsics_table": [ { "extrinsics": { "rotation": [0, 0, 0, 1], "translation": [3.0, 4.0, 5.0] }, "from": "first_component_name", "to": "second_component_name" }, { "extrinsics": { "rotation": [0, 0, 0, 1], "translation": [-3.0, -4.0, -5.0] }, "from": "second_component_name", "to": "first_component_name" } ] } ``` Aside from the special "extrinsics\_table" member which specifies the list of extrinsics pairs in the plex, each component is treated as a named member of the broader "tabular" format object. ## Examples[​](#examples "Direct link to Examples") #### Create a tabular calibration in the current directory[​](#create-a-tabular-calibration-in-the-current-directory "Direct link to Create a tabular calibration in the current directory") > ``` > metrical shape tabular input_data.results.json . > # To view it > jq -C . input_data.results.tabular.json | less > ``` #### Create a tabular calibration in MsgPack format in the `/calibrations` directory[​](#create-a-tabular-calibration-in-msgpack-format-in-the-calibrations-directory "Direct link to create-a-tabular-calibration-in-msgpack-format-in-the-calibrations-directory") > ``` > metrical shape tabular -f msgpack input_data.results.json /calibrations > ``` #### Create a tabular calibration that only includes `/camera*` topics[​](#create-a-tabular-calibration-that-only-includes-camera-topics "Direct link to create-a-tabular-calibration-that-only-includes-camera-topics") > ``` > metrical shape tabular --filter-component "/camera*" input_data.results.json . > # To view it > jq -C . input_data.results.tabular.json | less > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[PLEX\_OR\_RESULTS\_PATH][​](#plex_or_results_path "Direct link to \[PLEX_OR_RESULTS_PATH]") > The path to the input plex. This can be a MetriCal results JSON or a plex JSON. #### \[OUTPUT\_DIR][​](#output_dir "Direct link to \[OUTPUT_DIR]") > The output directory for this shaping operation. ## Options[​](#options "Direct link to Options") #### Universal Options[​](#universal-options "Direct link to Universal Options") As with every mode, all [universal options](/metrical/commands/commands_overview.md#universal-options) are supported (though not all may be used). #### -f, --format \[FORMAT][​](#-f---format-format "Direct link to -f, --format \[FORMAT]") > **Default: `json`** > > What serializer to use to format the output. Possible values: > > * json: Output the tabular format as JSON > * msgpack: Output the tabular format as MsgPack #### --filter-component \[FILTER\_COMPONENT][​](#--filter-component-filter_component "Direct link to --filter-component \[FILTER_COMPONENT]") > A set of filters for which components to include. Every filter is a component specifier (name or UUID) applied as if it is given an "OR" relationship with other filters. 
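To give a concrete sense of how this tabular output might be consumed downstream, here is a rough Python sketch that loads the JSON, applies each camera's LUT with OpenCV, and walks the extrinsics table. The file name, the image paths, and the interpretation of `remapping_columns` as the x-map and `remapping_rows` as the y-map are all assumptions for illustration; see the [lut-examples](https://gitlab.com/tangram-vision/oss/lut-examples) repository for authoritative usage.

```
# Rough sketch only: field interpretations above are assumptions, not a spec.
import json

import cv2
import numpy as np

with open("input_data.results.tabular.json") as f:
    tabular = json.load(f)

# Everything except "extrinsics_table" is a per-component entry.
extrinsics_table = tabular.pop("extrinsics_table")

for name, component in tabular.items():
    lut = component["lut"]
    width, height = lut["width"], lut["height"]
    # Row-major LUTs, reshaped to the calibrated image dimensions.
    map_x = np.asarray(lut["remapping_columns"], dtype=np.float32).reshape(height, width)
    map_y = np.asarray(lut["remapping_rows"], dtype=np.float32).reshape(height, width)

    # Undistort a sample image from this camera (hypothetical file name).
    image = cv2.imread(f"{name}.png")
    if image is not None:
        corrected = cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
        cv2.imwrite(f"{name}_corrected.png", corrected)

# Extrinsics pairs are listed in both directions, as shown in the example above.
for entry in extrinsics_table:
    rotation = entry["extrinsics"]["rotation"]        # quaternion
    translation = entry["extrinsics"]["translation"]  # translation vector
    print(f'{entry["from"]} -> {entry["to"]}: rotation={rotation}, translation={translation}')
```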
--- # URDF ## Purpose[​](#purpose "Direct link to Purpose") * Create a ROS-compatible URDF from this plex. ## Usage[​](#usage "Direct link to Usage") ``` metrical shape urdf [OPTIONS] --root-component ``` ## Concepts[​](#concepts "Direct link to Concepts") Create a URDF from a plex. This URDF is compatible with common ROS operations. ## Examples[​](#examples "Direct link to Examples") #### Create a URDF from a plex[​](#create-a-urdf-from-a-plex "Direct link to Create a URDF from a plex") > ``` > metrical shape urdf --root-component ir_one $RESULTS $OUTPUT_DIR > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[PLEX\_OR\_RESULTS\_PATH][​](#plex_or_results_path "Direct link to \[PLEX_OR_RESULTS_PATH]") > The path to the input plex. This can be a MetriCal results JSON or a plex JSON. #### \[OUTPUT\_DIR][​](#output_dir "Direct link to \[OUTPUT_DIR]") > The output directory for this shaping operation. ## Options[​](#options "Direct link to Options") #### Universal Options[​](#universal-options "Direct link to Universal Options") As with every mode, all [universal options](/metrical/commands/commands_overview.md#universal-options) are supported (though not all may be used). #### --use-optical-frame[​](#--use-optical-frame "Direct link to --use-optical-frame") > The `use_optical_frame` boolean determines whether or not to create a link for the camera frame in the URDF representation of the Plex. If `use_optical_frame` is `true` and the component is a camera, a new link for the camera frame is created and added to the URDF. This link represents the optical frame of the camera in the robot model: > > base -> camera (FLU) -> camera\_optical (RDF) > > If `use_optical_frame` is `false`, no such link is created. This might be the case if the optical frame is not needed in the URDF representation, or if it's being handled in some other way: > > base -> camera (FLU) #### -a, --root-component \[ROOT\_COMPONENT][​](#-a---root-component-root_component "Direct link to -a, --root-component \[ROOT_COMPONENT]") > The root component name or UUID to use for the joint transformations in the URDF. --- # Valid Data Formats * MCAP Files * ROS1 Bags * Folders ## MCAP Files[​](#mcap-files "Direct link to MCAP Files") [MCAP](//mcap.dev) is a flexible serialization format that supports a wide range of options and message encodings. This includes the capability to encode ROS1, ROS2/CDR serialized, Protobuf, Flatbuffer, and more. Header Time vs Log Time MetriCal uses the *header timestamp* in each message for synchronization purposes. If your current workflow uses the log time instead, you should make sure that the header timestamp is *also* populated during recording. If not, MetriCal will not be able to synchronize your data correctly. ### Message Encodings[​](#message-encodings "Direct link to Message Encodings") MetriCal only supports a limited subset of the total [well-known message encodings](//mcap.dev/spec/registry#message-encodings) in MCAP. 
| Message Type | ROS1 Encodings | ROS2 with CDR Serialization | Protobuf Serialization Schemas | | --------------- | ------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------- | | Image | [sensor\_msgs/Image](//docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Image.html) | [sensor\_msgs/msgs/Image](//github.com/ros2/common_interfaces/blob/iron/sensor_msgs/msg/Image.msg) | [RawImage.proto](https://github.com/foxglove/schemas/blob/main/schemas/proto/foxglove/RawImage.proto) | | CompressedImage | [sensor\_msgs/CompressedImage](//docs.ros.org/en/noetic/api/sensor_msgs/html/msg/CompressedImage.html) | [sensor\_msgs/msgs/CompressedImage](//github.com/ros2/common_interfaces/blob/iron/sensor_msgs/msg/CompressedImage.msg) | [CompressedImage.proto](https://github.com/foxglove/schemas/blob/main/schemas/proto/foxglove/CompressedImage.proto) | | PointCloud | [sensor\_msgs/PointCloud2](//docs.ros.org/en/noetic/api/sensor_msgs/html/msg/PointCloud2.html) | [sensor\_msgs/msgs/PointCloud2](//github.com/ros2/common_interfaces/blob/iron/sensor_msgs/msg/PointCloud2.msg) | [PointCloud.proto](https://github.com/foxglove/schemas/blob/main/schemas/proto/foxglove/PointCloud.proto) | | IMU | [sensor\_msgs/Imu](//docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Imu.html) | [sensor\_msgs/msgs/Imu](//github.com/ros2/common_interfaces/blob/iron/sensor_msgs/msg/Imu.msg) | -- | | H264 Video | -- | -- | [CompressedVideo.proto](https://github.com/foxglove/schemas/blob/main/schemas/proto/foxglove/CompressedVideo.proto) | ### Valid Image Encodings[​](#valid-image-encodings "Direct link to Valid Image Encodings") | Image type | Encoding | | --------------------- | -------------------------------------------------------------------------------------------------------------------------- | | Ordered pixels | `mono8`, `mono16`, `rgb8`, `bgr8`, `rgba8`, `bgra8`, `rgb16`, `bgr16`, `rgba16`, `bgra16` | | 8-bit, multi-channel | `8UC1`, `8UC2`, `8UC3`, `8UC4`, `8SC1`, `8SC2`, `8SC3`, `8SC4` | | 16-bit, multi-channel | `16UC1`, `16UC2`, `16UC3`, `16UC4`, `16SC1`, `16SC2`, `16SC3`, `16SC4` | | 32-bit, multi-channel | `32UC1`, `32UC2`, `32UC3`, `32UC4`, `32SC1`, `32SC2`, `32SC3`, `32SC4` | | 64-bit, multi-channel | `64UC1`, `64UC2`, `64UC3`, `64UC4`, `64SC1`, `64SC2`, `64SC3`, `64SC4` | | Bayer images | `bayer_rggb8`, `bayer_bggr8`, `bayer_gbrg8`, `bayer_grbg8`, `bayer_rggb16`, `bayer_bggr16`, `bayer_gbrg16`, `bayer_grbg16` | | YUYV images | `uyvy`, `UYVY`, `yuv422`, `yuyv`, `YUYV`, `yuv422_yuy2` | ## ROS1 Bags[​](#ros1-bags "Direct link to ROS1 Bags") ROS1 itself is no longer supported. However, MetriCal still supports the reading of ROS1 bags. If you're looking to choose a format compatible with MetriCal today, we do not recommend picking ROS1 bags; instead, we suggest using MCAP (see previous section). ROS1 support is primarily here for those who are already using a ROS1 installation (i.e. for compatibility). 
### Message Encodings[​](#message-encodings-1 "Direct link to Message Encodings") | Message Type | ROS1 Encodings | | --------------- | ------------------------------------------------------------------------------------------------------ | | Image | [sensor\_msgs/Image](//docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Image.html) | | CompressedImage | [sensor\_msgs/CompressedImage](//docs.ros.org/en/noetic/api/sensor_msgs/html/msg/CompressedImage.html) | | PointCloud | [sensor\_msgs/PointCloud2](//docs.ros.org/en/noetic/api/sensor_msgs/html/msg/PointCloud2.html) | | IMU | [sensor\_msgs/Imu](//docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Imu.html) | ### Valid Image Encodings[​](#valid-image-encodings "Direct link to Valid Image Encodings") | Image type | Encoding | | --------------------- | -------------------------------------------------------------------------------------------------------------------------- | | Ordered pixels | `mono8`, `mono16`, `rgb8`, `bgr8`, `rgba8`, `bgra8`, `rgb16`, `bgr16`, `rgba16`, `bgra16` | | 8-bit, multi-channel | `8UC1`, `8UC2`, `8UC3`, `8UC4`, `8SC1`, `8SC2`, `8SC3`, `8SC4` | | 16-bit, multi-channel | `16UC1`, `16UC2`, `16UC3`, `16UC4`, `16SC1`, `16SC2`, `16SC3`, `16SC4` | | 32-bit, multi-channel | `32UC1`, `32UC2`, `32UC3`, `32UC4`, `32SC1`, `32SC2`, `32SC3`, `32SC4` | | 64-bit, multi-channel | `64UC1`, `64UC2`, `64UC3`, `64UC4`, `64SC1`, `64SC2`, `64SC3`, `64SC4` | | Bayer images | `bayer_rggb8`, `bayer_bggr8`, `bayer_gbrg8`, `bayer_grbg8`, `bayer_rggb16`, `bayer_bggr16`, `bayer_gbrg16`, `bayer_grbg16` | | YUYV images | `uyvy`, `UYVY`, `yuv422`, `yuyv`, `YUYV`, `yuv422_yuy2` | ## Folders[​](#folders "Direct link to Folders") If your system does not use ROS or encode data as an MCAP file at all, you can still use MetriCal by providing it with recursive folders of structured data. This approach is a little easier to get started with compared to using MCAP files, but can leave a lot of performance (and data size, as MCAP files are often compressed-by-default) on the table. ### Message Types[​](#message-types "Direct link to Message Types") | Type | Guidelines | | ----------- | -------------------------------------------------------------------------------------------------- | | Image | Must be in JPEG or PNG format | | Point Cloud | Must be in PCD format. See more on [valid PCD messages](#pointcloud2-messages-and-pcd-files) below | ### Folder format description[​](#folder-format-description "Direct link to Folder format description") MetriCal assumes that the folder layout looks something like the following: ``` . └── data/ <--- passed as $DATA argument in CLI ├── topic_1/ ├── topic_2/ └── topic_3/ ``` ![Calibration Data Folder Structure](/assets/images/cal_data_folder_structure-6fa37a1ad675b48debf0b5c61c77a409.png) Where each directory contains inputs whose file names correspond to timestamps (canonically, in nanoseconds). For example, if we had a message topic named `camera_1`, we might have the following example tree of files: ``` . ├── camera_1/ │   ├── 1643230518150000000.png │   ├── 1643230523197000000.png │   ├── 1643230526125000000.png │   ├── 1643230529419000000.png │   ├── 1643230532161000000.png │   └── 1643230537869000000.png ... ``` The folder names should correspond to the default topic names (e.g. such as in a ROS bag). Every topic does not need to have the same number of messages, or even share exactly the same timestamps as other topics. 
However, these timestamps will be assumed to be synced according to what is provided in the [plex](/metrical/core_concepts/plex_overview.md). ### PointCloud2 messages and PCD Files[​](#pointcloud2-messages-and-pcd-files "Direct link to PointCloud2 messages and PCD Files") MetriCal has partial support for v0.7 of the [pcd file format](//pointclouds.org/documentation/tutorials/pcd_file_format.html) for reading point cloud data. Since pcd is a very flexible format, we impose the following restrictions on pointclouds we can process: * Fields named "x", "y" and "z" are all present (referred to as Geometric data) * Geometric data fields are floating point (`f32`, `float`) values * One of the fields "intensity", "i", or "reflectivity" is present (referred to as Intensity data) * Intensity data contains data of one of the following types * Unsigned 8 bit integer (`u8`, `uint8_t`) * Unsigned 16 bit integer (`u16`, `uint16_t`) * Unsigned 32 bit integer (`u32`, `uint32_t`) * 32 bit floating point (`f32`, `float`) * Each of the Geometric data and Intensity data fields contains precisely 1 count ### PCD Restrictions[​](#pcd-restrictions "Direct link to PCD Restrictions") In addition to LiDAR / point cloud data needing to include the fields described above, we make some additional restrictions for PCD files when reading in data in the folder format: * The pcd data format is either ascii or binary * Explicitly, "`binary_compressed`" is not yet supported As an example, a pcd with the header below would be supported. ``` # .PCD v.7 - Point Cloud Data file format VERSION .7 FIELDS x y z intensity SIZE 4 4 4 4 TYPE F F F F COUNT 1 1 1 1 WIDTH 213 HEIGHT 1 VIEWPOINT 0 0 0 1 0 0 0 POINTS 213 DATA binary ``` Whereas, a pcd with the header below would not be supported ``` # .PCD v.7 - Point Cloud Data file format VERSION .7 FIELDS x y z intensity SIZE 4 4 4 4 TYPE F F F I COUNT 3 3 3 1 WIDTH 213 HEIGHT 1 VIEWPOINT 0 0 0 1 0 0 0 POINTS 213 DATA binary_compressed ``` --- # Group Management R\&DGrowthEnterprise A new group is automatically created for every account on the Tangram Vision Hub upon signup. Manage group details on the “Group” page. ## Naming Your Group[​](#naming-your-group "Direct link to Naming Your Group") You can create a name for your organization by selecting the pencil icon to the right of the default title “Your Organization Name”. You can always go back and update this name later if you decide to change it. ![Screenshot of where to rename your group](/assets/images/hub_name_group-59e3c6d067cd99274018878dbd981913.png) ## Adding Group Members & Assigning Roles[​](#adding-group-members--assigning-roles "Direct link to Adding Group Members & Assigning Roles") To invite a member to your group, go to the “Invite a New Member” section of the Group administration page. ![Screenshot of the form to invite a new group member](/assets/images/hub_send_invite-179936522f4c5b4741416c4e1222315d.png) Enter the email address of the new group member that you would like to invite. Select their role. Then press the “Send Invite Email” button to send an invitation to the new group member. *Note: Unredeemed invites expire after 7 days. You can always resend an invitation if it hasn’t been redeemed.* About Roles There are two available roles in the Tangram Vision Hub: Admin and Member. Members have basic privileges. Members can: * create or revoke personal licenses * create or revoke group licenses * see other group members Admins have more privileges than members. 
Admins can do everything members can do, along with the following: * invite new members * remove existing members * edit the role of existing members * edit the organization name * subscribe to services * add or edit billing methods * edit the billing email address ## Managing Group Members[​](#managing-group-members "Direct link to Managing Group Members") If you are a group admin, there are two controls available to manage group members. ### Assigning Roles[​](#assigning-roles "Direct link to Assigning Roles") If you’d like to upgrade a member to an admin, or downgrade an admin to a member, you can do that on the group administration page under the “Members” heading. Each group member has an assigned role that can be changed by selecting the dropdown under the “Role” column. Simply select that dropdown menu in the row of the user whose role you wish to change, and make the change. Once you have changed the role in the dropdown menu, that change is automatically saved. You aren’t allowed to downgrade yourself from admin to member, to prevent a group from becoming admin-less. If you want to be downgraded, ask another admin to demote you. ### Removing Group Members[​](#removing-group-members "Direct link to Removing Group Members") If you wish to remove a member from a group, you can also do that on the group administration page under the “Members” heading. Find the user who you wish to remove from the group, and then click the “Remove From Group” text under the “Actions” column. A dialogue will appear asking you to confirm this decision. Click “OK” to proceed with removing the user from the group, or “Cancel” to cancel the action. IMPORTANT! When you remove a user from your group, you will also remove their personal licenses to use the Tangram Vision software. If you have infrastructure or processes that depend on the user’s personal licenses, they will no longer pass license checks. Please make sure that you want to perform this action before you proceed. ## Joining a Group[​](#joining-a-group "Direct link to Joining a Group") To join a group, you must be invited by a group admin. If you have not received an invitation to join a group, please contact your organization’s group admin and request an invitation. If you have received an invitation to join a group, click the link in the email that says “View your Tangram Vision Hub Invite”. You will be taken to the Tangram Vision Hub login page, where you can create a new account using a Google account, a GitHub account, or your email. Once you are in the Hub, you will be taken to a landing page where you can accept the invitation to join a group. Click “Accept Invite” - now you’ve joined your organization’s group! ![Screenshot of page where you can accept an invitation to a group](/assets/images/hub_accept_invite-35c2ffb348d2b83fd859d5951529be77.png) If you see an error page instead of the “Accept Invite” page, please check the following: * Your invite may have expired or been revoked. Please contact your group admin for a new invite. * You already redeemed this invite. Please check the Group page in the Hub to confirm that you are in the group that you expect. * You signed into the Hub with a different email address than was invited. Please log out and follow the “View your Tangram Vision Hub Invite” link from the invite email again, signing in with the email address that was invited. Should you ever wish to leave the group, contact your admin to request that they remove you from the group. 
--- # Installation There are two ways to install MetriCal. If you are using Ubuntu or Pop!\_OS, you can install MetriCal via the `apt` package manager. If you are using a different operating system, or if your system configuration needs it, you can install MetriCal via Docker. Unlicensed users have nearly full access to all of MetriCal's commands and features. The only things they won't be able to do are access the actual calibration values or output the results of a calibration to JSON. * Apt Repository * Docker ## Install MetriCal via Apt Repository[​](#install-metrical-via-apt-repository "Direct link to Install MetriCal via Apt Repository") Tangram Vision maintains a repository for MetriCal on a private Cloudsmith instance ([cloudsmith.io](https://cloudsmith.io/)). Compatible OS: * Ubuntu 20.04 (Focal Fossa) * Ubuntu and Pop!\_OS 22.04 (Jammy Jellyfish) * Ubuntu and Pop!\_OS 24.04 (Noble Numbat) ### Stable Releases[​](#stable-releases "Direct link to Stable Releases") metrical\_install.sh ``` curl -1sLf \ 'https://dl.cloudsmith.io/public/tangram-vision/apt/setup.deb.sh' \ | sudo -E bash sudo apt update; sudo apt install metrical; ``` ### Release Candidates[​](#release-candidates "Direct link to Release Candidates") metrical\_install\_rc.sh ``` curl -1sLf \ 'https://dl.cloudsmith.io/public/tangram-vision/apt-rc/setup.deb.sh' \ | sudo -E bash sudo apt update; sudo apt install metrical; ``` ## Use MetriCal with Docker[​](#use-metrical-with-docker "Direct link to Use MetriCal with Docker") Every version of MetriCal is available as a Docker image. Even visualization features can be run from Docker using Rerun as a separate process. For Docker When you see a section like this in the documentation, it refers to a Docker-specific feature or method. Keep an eye out for them! ### 1. Install Docker[​](#1-install-docker "Direct link to 1. Install Docker") If you do not have Docker installed, follow the official installation instructions on the Docker website. ### 2. Download MetriCal via Docker[​](#2-download-metrical-via-docker "Direct link to 2. Download MetriCal via Docker") There are two types of MetriCal releases. All releases can be found [listed on Docker Hub](https://hub.docker.com/r/tangramvision/cli/tags). #### Stable Release[​](#stable-release "Direct link to Stable Release") Stable releases are an official version bump for MetriCal. These versions are verified and tested by Tangram Vision and MetriCal customers. They are guaranteed to have a stable API and follow SemVer. Find documentation for these releases under their version number in the nav bar. Stable releases can be pulled using the following command: ``` docker pull tangramvision/cli:latest ``` Install a specific version with a command like: ``` docker pull tangramvision/cli:13.0.0 ``` #### Release Candidates[​](#release-candidates-1 "Direct link to Release Candidates") Release candidates are versions of MetriCal that include useful updates, but are relatively untested or unproven and could contain bugs. As time goes on, release candidates will either evolve into stable releases or be replaced by newer release candidates. The latest release candidate can be referenced with the `dev-latest` alias, and can be installed by running the following: ``` docker pull tangramvision/cli:dev-latest ``` Specific release candidates can be installed the same way you'd install any other versioned release (see above). *** With that, you should now have a MetriCal instance on your machine! 
We'll assume the Stable release (`tangramvision/cli:latest`) for the rest of the introduction. ### 3. Create a MetriCal Docker Alias[​](#3-create-a-metrical-docker-alias "Direct link to 3. Create a MetriCal Docker Alias") Throughout the documentation, you will see references to `metrical` in the code snippets. This is a named bash function describing a larger docker command. For convenience, it can be useful to include that function (outlined below) in your script or shell config file (e.g. `~/.bashrc`): \~/.bashrc ``` metrical() { docker run --rm --tty --init --user="$(id -u):$(id -g)" \ --volume="$MOUNT":"/datasets" \ --volume=metrical-license-cache:/.cache/tangram-vision \ --workdir="/datasets" \ --add-host=host.docker.internal:host-gateway \ tangramvision/cli:latest \ "$@"; } ``` Now you should be able to run `metrical` wherever! `--volume` and `--workdir` The `--volume` flag syntax represents a bridge between the host machine and the docker instance. If your data is contained within the directory `/home/user/datasets`, then you would replace `$MOUNT` with `/home/user/datasets`. `--workdir` indicates that we're now primarily working in the `/datasets` directory within the docker container. All subsequent MetriCal commands are run as if from that `/datasets` directory. --- # License Creation R\&DGrowthEnterprise License Deprecation With MetriCal v14 (released in May 2025), version 1 license keys (prefixed with `key/`) have been **deprecated and will no longer work after November 1st, 2025**. If you use license keys prefixed with `key/`, please create new license keys (which will be version 2 keys, prefixed with `key2/`) and use them instead. ## Why Get A License?[​](#why-get-a-license "Direct link to Why Get A License?") MetriCal is available for download and evaluation without a license requirement. The software provides comprehensive functionality including data processing and metrics generation in unlicensed mode. However, accessing and exporting calibration results requires a valid license key. Tangram Vision Hub All of the following happens in the Tangram Vision Hub. If you don't have an account, sign up for one at [hub.tangramvision.com](https://hub.tangramvision.com). ## Personal Licenses[​](#personal-licenses "Direct link to Personal Licenses") To create a license, navigate to your “Account” page in the Hub. In the Personal Licenses section, you will be able to create a new license. First, choose a name for the license. A good idea is to associate the license name with the specific device it will be used with. Once you have added a name, click “Create” and a new license will be created. ![Screenshot of personal licenses table and form with a license and usage instructions shown](/assets/images/hub_personal_licenses-ae988d537659326cbc20427d37a51202.png) Upon creating a license, you will see the license key along with a brief explanation of the different ways that you can provide the license key to MetriCal, along with a [link to more detailed instructions](/metrical/configuration/license_usage.md). To revoke a personal license, click the “REVOKE” button in the “Actions” tab. Running Tangram Vision software with that license will thereafter return an error. If a user leaves or is removed from a group, their personal licenses are automatically revoked. 
## Group Licenses[​](#group-licenses "Direct link to Group Licenses") Unlike personal licenses, group licenses are not tied to a particular user and will not be automatically revoked if the license’s creator leaves or is removed from the group. To create a license, navigate to the “Group” page in the Hub. In the Group Licenses section, you will be able to create a new license. First, choose a name for the license. A good idea is to associate the license name with the specific device or process it will be used with. Once you have added a name, click “Create” and a new license will be created. Role Required Users in both “admin” and “member” roles can create and revoke group licenses. ![Screenshot of group licenses table and form with a license and usage instructions shown](/assets/images/hub_group_licenses-59f4d3ef0886b0b356cda04ab47b3c77.png) To revoke a group license, click the “REVOKE” button in the “Actions” tab. Running Tangram Vision software with that license will thereafter return an error. --- # License Usage License Deprecation With MetriCal v14 (released in May 2025), version 1 license keys (prefixed with `key/`) have been **deprecated and will no longer work after November 1st, 2025**. If you use license keys prefixed with `key/`, please create new license keys (which will be version 2 keys, prefixed with `key2/`) and use them instead. MetriCal licenses are user-specific rather than machine-specific. A license key can be utilized on any system with internet connectivity that can establish a connection to Tangram Vision's authentication servers. For environments with limited connectivity, offline licensing options are available (detailed below). ## Using a License Key[​](#using-a-license-key "Direct link to Using a License Key") R\&DGrowthEnterprise MetriCal looks for license keys in 3 places, in this order: ### 1. Command Line Argument[​](#1-command-line-argument "Direct link to 1. Command Line Argument") Provide the key as an argument to the `metrical` command. metrical\_runner.sh ``` metrical --license="key2/" calibrate ... ``` For Docker This command line argument can be included directly in the `metrical` shell function by adding it before the `"$@"` line of your alias. \~/.bashrc ``` metrical() { docker run --rm --tty --init --user="$(id -u):$(id -g)" \ --volume="$MOUNT":"/datasets" \ --volume=metrical-license-cache:/.cache/tangram-vision \ --workdir="/datasets" \ --add-host=host.docker.internal:host-gateway \ tangramvision/cli:latest \ # Note the following line! --license="key2/" \ "$@"; } ``` ### 2. Environment Variable[​](#2-environment-variable "Direct link to 2. Environment Variable") Provide the key as a string in the environment variable `TANGRAM_VISION_LICENSE`. metrical\_runner.sh ``` TANGRAM_VISION_LICENSE="key2/" metrical calibrate ... ``` For Docker #### Using Environment Variables in Docker[​](#using-environment-variables-in-docker "Direct link to Using Environment Variables in Docker") If running MetriCal via Docker, the environment variable must be set *inside* the container. The [docker run documentation](https://docs.docker.com/reference/cli/docker/container/run/#env) shows various methods for setting environment variables inside a container. 
One example of how you can do this is to add an `--env` flag to the `docker run` invocation inside the `metrical` shell function that was shown above, which would then look like this: \~/.bashrc ``` metrical() { docker run --rm --tty --init --user="$(id -u):$(id -g)" \ --volume="$MOUNT":"/datasets" \ --volume=metrical-license-cache:/.cache/tangram-vision \ --workdir="/datasets" \ --add-host=host.docker.internal:host-gateway \ # Note the following line! --env=TANGRAM_VISION_LICENSE="key2/" \ tangramvision/cli:latest \ "$@"; } ``` ### 3. Config File[​](#3-config-file "Direct link to 3. Config File") Provide the key as a string in a config TOML file, assigned to a top-level `license` key. This key should be placed in your config directory at `~/.config/tangram-vision/config.toml`. \~/.config/tangram-vision/config.toml ``` license = "key2/{your_key}" ``` For Docker #### Using a Config File in Docker[​](#using-a-config-file-in-docker "Direct link to Using a Config File in Docker") To use a config file in Docker, you’ll need to modify the `metrical` shell function by mounting the config file to the expected location. Use the following snippet, making sure to update `path/to/config.toml` to point to your config file. \~/.bashrc ``` metrical() { docker run --rm --tty --init --user="$(id -u):$(id -g)" \ --volume="$MOUNT":"/datasets" \ --volume=metrical-license-cache:/.cache/tangram-vision \ # Note the following line! --volume=path/to/config.toml:/.config/tangram-vision/config.toml:ro \ --workdir="/datasets" \ --add-host=host.docker.internal:host-gateway \ tangramvision/cli:latest \ "$@"; } ``` ## Using a License Key Offline[​](#using-a-license-key-offline "Direct link to Using a License Key Offline") R\&DGrowthEnterprise MetriCal can validate a license via a local license-cache file, ensuring that internet hiccups don't cause license validation failures that interrupt critical calibration processes. In order to use MetriCal without an active internet connection, you must run any MetriCal command with an active internet connection once. (This can be as simple as running `metrical calibrate foo bar baz`, even if `foo`, `bar`, and `baz` files do not exist.) This will create a license-cache file that is valid (and enables offline usage of MetriCal) for 1 week. Every time MetriCal is run with an active connection, the license-cache file will be refreshed and valid for 1 week. If the license-cache file hasn't been refreshed in more than a week and MetriCal is run offline, it will exit with a "License-cache file is expired" error. For Docker Include an additional volume mount when running the docker container, so a license-cache file can persist between MetriCal runs. Update the `metrical` shell function to include the `--volume=metrical-license-cache:...` line shown below: \~/.bashrc ``` metrical() { docker run --rm --tty --init --user="$(id -u):$(id -g)" \ --volume="$MOUNT":"/datasets" \ # The following line enables offline licensing --volume=metrical-license-cache:/.cache/tangram-vision \ --workdir="/datasets" \ --add-host=host.docker.internal:host-gateway \ tangramvision/cli:latest \ "$@"; } ``` --- # Visualization It's so nice to see your data! MetriCal provides visualization features to help you understand exactly what's going on with your calibrations, so you're never left in the dark. 
* Visual inspection of detections during the calibration process * Ability to verify the spatial alignment of different sensors * Confirmation that your calibration procedure is capturing the right data * Immediate visual feedback on calibration quality By properly setting up and using the visualization features in MetriCal, you can gain greater confidence in your calibration results and more easily troubleshoot any issues that arise during the calibration process. ## Setting up Visualization with Rerun[​](#setting-up-visualization-with-rerun "Direct link to Setting up Visualization with Rerun") MetriCal uses [Rerun](https://www.rerun.io) as its visualization engine. There are two ways MetriCal interacts with Rerun: * Option 1: MetriCal can spawn a Rerun server and connect to it directly. This is the default behavior. * Option 2: MetriCal can connect to another Rerun server running elsewhere (on host, in network, etc). For Docker When using MetriCal via Docker, you'll need to run a separate Rerun server on your host machine. ### Installing Rerun[​](#installing-rerun "Direct link to Installing Rerun") Install Rerun on your host machine using either `pip` or `cargo`: ``` # Option 1: via pip pip install rerun-sdk==0.20 # Option 2: via cargo cargo install rerun-cli --version ^0.20 ``` Match Versions! Make sure to install Rerun version 0.20 to ensure compatibility with MetriCal's current version. Rerun is a great tool, but it's still in heavy development, and there's no guarantee of backwards compatibility. For Docker ### Run Separate Rerun Server[​](#run-separate-rerun-server "Direct link to Run Separate Rerun Server") Before using any visualization in MetriCal, start a Rerun rendering server in a separate terminal: ``` rerun --memory-limit=1GB ``` Then, ensure your `docker run` command includes the host gateway configuration: ``` --add-host=host.docker.internal:host-gateway ``` This allows the Docker container to communicate with Rerun running on your host machine. ## Visualization in Different Modes[​](#visualization-in-different-modes "Direct link to Visualization in Different Modes") ### Display Mode[​](#display-mode "Direct link to Display Mode") The Display command is designed specifically for visualization, allowing you to see the applied results of your calibration. ``` metrical display [OPTIONS] $INPUT_DATA_PATH $PLEX_OR_RESULTS_PATH ``` The Display command visualizes the calibration results applied to your dataset in real-time, providing a quick "ocular validation" of your calibration quality. Read more about the [Display command here](/metrical/commands/display.md). ### Calibrate Mode[​](#calibrate-mode "Direct link to Calibrate Mode") In Calibrate mode, you can visualize the detection process and calibration data by adding the `--render` flag: ``` metrical calibrate \ --render \ --output-json results.json \ $DATA $PLEX $OBJSPC ``` This allows you to see detections in real-time as the calibration process runs. Learn more about the [Calibrate command here](/metrical/commands/calibrate.md). 
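If you want to script this workflow end to end (for example, when MetriCal runs inside Docker and Rerun runs on the host), the two steps above can be wrapped in a few lines of Python. This is a convenience sketch, not an official workflow: it assumes the `rerun` viewer and a working `metrical` command are both available on your PATH, and the dataset, plex, and object space paths are placeholders.

```
import subprocess
import time

# Start the Rerun viewer in the background with a memory cap, as recommended above.
rerun_viewer = subprocess.Popen(["rerun", "--memory-limit=1GB"])
time.sleep(2.0)  # give the viewer a moment to start listening

try:
    # Run a calibration with live rendering of detections enabled.
    subprocess.run(
        [
            "metrical", "calibrate",
            "--render",
            "--output-json", "results.json",
            "data.mcap", "plex.json", "object_space.json",
        ],
        check=True,
    )
finally:
    rerun_viewer.terminate()
```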
## Advanced Visualization Options[​](#advanced-visualization-options "Direct link to Advanced Visualization Options") ### Custom Render Socket[​](#custom-render-socket "Direct link to Custom Render Socket") If you're running Rerun on a non-default port or IP address, use the `--render-socket` option: ``` metrical display --render-socket="127.0.0.1:3030" $DATA $RESULTS ``` By default: * Docker setup: `host.docker.internal:9876` * Local setup: `127.0.0.1:9876` When running Rerun from its CLI, the IP would correspond to its `--bind` option and the port would correspond to its `--port` option. ## Troubleshooting Visualization[​](#troubleshooting-visualization "Direct link to Troubleshooting Visualization") If you're having trouble with visualization: 1. Make sure Rerun is running and listening on the correct port 2. Verify that you've included the `--add-host=host.docker.internal:host-gateway` flag when running MetriCal via Docker 3. Check that you're using a compatible version of Rerun (v0.20) 4. Try specifying the render socket explicitly with `--render-socket` --- # Components A ***component*** is an atomic sensing unit (for instance, a camera) that can output zero or more streams of observations. The observations that it produces inform its type. ## Common Features[​](#common-features "Direct link to Common Features") Every component contains some common fields. These are primarily used for identification within the Plex. | Field | Type | Description | | ----- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------- | | UUID | `Uuid` | A universally unique identifier for the component. | | Name | `String` | A name to reference the component. These must be unique, and MetriCal will treat them as the "topic" name to match within the provided dataset. | ## Component Kinds[​](#component-kinds "Direct link to Component Kinds") ### Camera[​](#camera "Direct link to Camera") Cameras are a fundamental visual component. In addition to the common fields that every component contains, they are defined by the following types: | Field | Type | Description | | ----------- | -------------------------- | ------------------------------------------------------------------------------------ | | Intrinsics | A camera intrinsics object | Intrinsic parameters that describe the camera model. | | Covariance | Matrix of floats | An n×n covariance matrix describing the variance-covariance of intrinsic parameters. | | Pixel pitch | float | The metric size of a pixel in real space. If unknown, should be equal to 1.0. | Pixel pitch units It is common practice to leave most observations and arithmetic in units of pixels when dealing with image data. However, this practice can get confusing when trying to compare two different camera types, as "1 pixel" may not equate to the same metric error between cameras. Pixel pitch allows us to compare cameras using a common unit, i.e. units-per-pixel. Note that we leave the unit ambiguous here, as this field is primarily for making analysis easier on human eyes. Common units include microns-per-pixel (μm / pixel) and meters-per-pixel (m / pixel). #### Intrinsic Modeling[​](#intrinsic-modeling "Direct link to Intrinsic Modeling") "Intrinsics" can refer to different models depending on the lens type. 
MetriCal provides the following intrinsics models for cameras: | Model Name | Description | | ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | | `"no_distortion"` | [No distortion model applied](/metrical/calibration_models/cameras.md#no-distortion), i.e. an ideal pinhole model | | `"opencv_radtan"` | [OpenCV RadTan](/metrical/calibration_models/cameras.md#opencv-radtan) | | `"opencv_fisheye"` | [OpenCV Fisheye](/metrical/calibration_models/cameras.md#opencv-fisheye) | | `"opencv_rational"` | [OpenCV Rational](/metrical/calibration_models/cameras.md#opencv-rational), an extension of OpenCV RadTan with radial terms in the denominator | | `"pinhole_with_brown_conrady"` | [Inverse Brown-Conrady](/metrical/calibration_models/cameras.md#pinhole-with-inverse-brown-conrady), with correction in image space | | `"pinhole_with_kannala_brandt"` | [Inverse Kannala-Brandt](/metrical/calibration_models/cameras.md#pinhole-with-inverse-kannala-brandt), with correction in image space | | `"eucm"` | [Enhanced Unified Camera Model](/metrical/calibration_models/cameras.md#eucm), i.e. EUCM | | `"double_sphere"` | [Double Sphere](/metrical/calibration_models/cameras.md#double-sphere) | | `"omni"` | [Omnidirectional](/metrical/calibration_models/cameras.md#omnidirectional-omni) | | `"power_law"` | [Power Law](/metrical/calibration_models/cameras.md#power-law) | One might choose each of these models based on their application, or dependent upon what software they wish to be compatible with. In many cases, the choice of model may not matter as much as the data capture process. If you're having trouble deciding which model will fit your system best, [contact us](mailto:support@tangramvision.com) and we can help you understand the differences that will matter to you! *** ### LiDAR[​](#lidar "Direct link to LiDAR") MetriCal's categorization of LiDAR comprises a variety of similar yet slightly-different sensors. Some examples could be: * A Velodyne VLP-16 scanner * An Ouster OS2 scanner * A Livox HAP scanner * etc. All of the above LiDAR can be represented as LiDAR components within MetriCal. #### Intrinsic Modeling[​](#intrinsic-modeling-1 "Direct link to Intrinsic Modeling") There is no intrinsics model for LiDAR in MetriCal. Hence, the only model available is: | Model Name | Description | | ---------- | ---------------------------------------------------- | | `"lidar"` | Extrinsics calibration of lidar to other modalities. | *** ### IMU[​](#imu "Direct link to IMU") Inertial Measurement Units (IMU) measure specific-force (akin to acceleration) and rotational velocity. This is done through a combination of accelerometer and gyroscopic sensors. IMUs in MetriCal are a combined form of component as MEMS IMUs are rarely able to atomically and physically separate the accelerometer and gyroscope measurements. 
Like other component types, aside from the common fields that every component contains, IMUs are defined by the following types: | Field | Type | Description | | -------------------- | ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------- | | Bias | An IMU bias | [IMU bias parameters](/metrical/calibration_models/imu.md#imu-bias) | | Intrinsics | An IMU intrinsics object | [IMU intrinsics model](/metrical/calibration_models/imu.md#imu-model-descriptions) | | Noise Parameters | An IMU noise parameters object | [Noise parameters](/metrical/calibration_models/imu.md#noise-parameters) | | Bias Covariance | A matrix of floats | A 6×6 covariance matrix describing the variance-covariance of the IMU bias parameters. | | Intrinsic Covariance | A matrix of floats | An n×n covariance matrix describing the variance-covariance of the IMU intrinsic parameters, with the value of n dependent on the intrinsics model used. | #### Intrinsic Modeling[​](#intrinsic-modeling-2 "Direct link to Intrinsic Modeling") The following models are provided as part of MetriCal: | Model Name | Description | | -------------------------------------- | ------------------------------------------------------------------------------------------------------------ | | `"scale"` | [Scale model](/metrical/calibration_models/imu.md#imu-model-descriptions) | | `"scale_shear"` | [Scale and shear model](/metrical/calibration_models/imu.md#imu-model-descriptions) | | `"scale_shear_rotation"` | [Scale, shear and rotation model](/metrical/calibration_models/imu.md#imu-model-descriptions) | | `"scale_shear_rotation_g_sensitivity"` | [Scale, shear, rotation and g-sensitivity model](/metrical/calibration_models/imu.md#imu-model-descriptions) | --- # Constraints A ***constraint*** is a spatial, temporal, or semantic relation between any two components. In the context of interpreting plexes as a multigraph of relationships over our system (comprised of different components), constraints are the edge-relationships of the graph. ![Constraint Types](/assets/images/constraint_types-399e400434ab725f4144b7e8be83a488.png) ## Conventions[​](#conventions "Direct link to Conventions") ### "To" and "From"[​](#to-and-from "Direct link to \"To\" and \"From\"") Constraints are always across two components. MetriCal will often refer to each of these components in a directional sense using the "from" and "to" specifiers, which reference the UUIDs of the components. This directional specifier is useful for spatial and temporal constraints, because it allows us to know how the extrinsics from the spatial constraints, or the synchronization data from the temporal constraints, can be applied to data to put two observations into the same frame of reference. Our extrinsics type is essentially a way to describe how to transform points in one coordinate system into another. Anyone who has ever worked with transforms has experienced confusion in convention. In order to cut through the ambiguity of extrinsics, every spatial constraint has a `from` and `to` field. Let's dive into how this works. We can think of an extrinsics transform between components A and B using the following notation: Γ_B^A := Γ_{from B}^{to A} If we wanted to move a point p from the frame of reference of component B to that of A, we would use the following math: p_W^A = Γ_B^A ⋅ p_W^B ...also read as "p^A equals p^B by transforming to A from B". 
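To make the notation concrete, here is a tiny numeric sketch (in Python, with made-up values) of applying a "to A, from B" extrinsic to a point observed in component B's frame:

```
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical extrinsics "to A from B": a 90 degree yaw plus a small translation.
rotation_a_from_b = Rotation.from_euler("z", 90, degrees=True)
translation_a_from_b = np.array([0.1, 0.0, 0.0])

# A point observed in component B's coordinate frame.
p_in_b = np.array([1.0, 2.0, 0.5])

# p_W^A = Γ_B^A ⋅ p_W^B : rotate, then translate.
p_in_a = rotation_a_from_b.apply(p_in_b) + translation_a_from_b
print(p_in_a)  # the same physical point, now expressed in A's frame
```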
Thus, when constructing a spatial constraint, **the reference frame for the extrinsics transform is in the coordinate frame of component A**, and would move points from the coordinate frame of component B. Similar examples can be made for converting timestamps from e.g. B into the same "clock" as A using temporal constraints. ### Coordinate Bases in MetriCal[​](#coordinate-bases-in-metrical "Direct link to Coordinate Bases in MetriCal") It's common to represent a transform in a certain convention, such as FLU (X-Forward, Y-Left, Z-Up) or RDF (X-Right, Y-Down, Z-Forward). One might then wonder what the default coordinate system is for MetriCal. Short answer: it entirely depends on the data you're working with. In MetriCal, spatial constraints are designed to transform *observations* (not components!) from the `from` frame to the `to` frame. In the case of a camera-LiDAR extrinsics transform Γ_L^C, the solved extrinsics will move LiDAR observations p^L (which may be in FLU) to a camera observation's coordinate frame (which may be in RDF): p^C = Γ_L^C ⋅ p^L This makes it simple to move observations from one component to another for sensor fusion tasks. This is what is meant when MetriCal is said to have no "default" component coordinate system; it operates directly on the provided observations! However, for those users that *really need* the extrinsics in a specific coordinate basis, MetriCal provides. There are two relevant global options that can be passed to any command that produces or displays a calibrated plex ([Calibrate](/metrical/commands/calibrate.md), [Shape](/metrical/commands/shape/shape_overview.md), and [Report](/metrical/commands/report.md)). These are: * `--topic-to-observation-basis` (or `-z`): Assign a topic a specific coordinate basis for its *observations*, i.e. the data it produces. For cameras, this is usually RDF. * `--topic-to-component-basis` (or `-Z`): Assign a topic a specific coordinate basis for its *component*, i.e. the sensor itself. If both of these options are set for every component, MetriCal will transform the extrinsics into the proper basis for you! Let's take a look at an example: ``` metrical calibrate -z ir_cam:RDF -z lidar:FLU -Z *:FLU $DATA $PLEX $OBJ ``` Here, we're telling MetriCal that the observations from the camera topic are in RDF, while the data from our LiDAR is in FLU. We're also telling MetriCal that we need the output of the entire plex to be in FLU, probably because our system expects that basis natively. In this case, MetriCal will produce two plexes in the results JSON: * `results.plex` - The first plex is what is natively produced by the calibration process, with extrinsics in the basis of the observations. * `results.changed_basis_plex` - The second plex is the same as the first, but with all extrinsics transformed into the desired component bases. Note that this second plex is for your use only; if you use this in Init or Calibrate mode, MetriCal will probably start a little confused. It needs `results.plex` for proper operation. ## Spatial Constraints[​](#spatial-constraints "Direct link to Spatial Constraints") It is common to ask for the spatial relationship or extrinsics between two given components. A Plex incorporates this information in the form of what is called *spatial constraints*. 
A spatial constraint can be broken down into: | Field | Type | Description | | --- | --- | --- | | Extrinsics | An extrinsics object | The extrinsics describing the "*To*" from "*From*" transformation. | | Covariance | A matrix of floats | The 6×6 covariance of the extrinsics described by this constraint. | | From | UUID | The UUID of the component that describes the "*From*" or base coordinate frame. | | To | UUID | The UUID of the component that describes the "*To*" coordinate frame, which we are transforming into. This can be considered the "origin" of the extrinsics matrix. | For a single-camera system, a plex is a simple affair. For a complicated multi-component system, plexes can become incredibly complex and difficult to parse. Unlike other calibration systems, MetriCal creates fully connected graphs whenever it can. Everything is related! ![A perfectly reasonable plex](/assets/images/shape_plex-695860bea6def4a922127eff146cdd04.png) ### Spatial Covariance[​](#spatial-covariance "Direct link to Spatial Covariance") Spatial covariance is generally presented as a 6×6 matrix relating the variance-covariance of an se(3) Lie algebra element: $[v_1\;\; v_2\;\; v_3\;\; \omega_1\;\; \omega_2\;\; \omega_3]$ When traversing for spatial constraints within the Plex, the constraint returned will always contain the extrinsic with the *minimum overall covariance*. This ensures that users will always get the extrinsic that has the smallest covariance (thus, the highest confidence / precision), even if multiple spatial constraints exist between any two components. ![Covariances of the plex's spatial constraints](/assets/images/shape_covariance-ee2d1c8fad41e41cd8fdf8d282c7a0f7.png) Spatial Constraint Traversal - Python Example Since all spatial constraints in a plex form an undirected (maybe even fully connected) graph, it can be confusing to figure out how to traverse that graph to find the "best" extrinsics between two components. MetriCal itself provides the [Shape](/metrical/commands/shape/shape_overview.md) command to help with this (with a few helpful options in the [Report](/metrical/commands/report.md) command), but sometimes it's useful to do your own thing. To help would-be derivers, the [spatial\_constraint\_traversal](https://gitlab.com/tangram-vision/oss/spatial_constraint_traversal) repository demonstrates the right way to derive optimal extrinsics straight from the Plex JSON via the magic of Python; a rough sketch of the idea also appears just after the temporal constraints table below. ## Temporal Constraints[​](#temporal-constraints "Direct link to Temporal Constraints") Time is a tricky thing in perception, but of crucial importance to get right. We've developed our temporal constraint to be flexible enough to describe many of the most common timing scenarios between components. | Field | Type | Description | | --- | --- | --- | | Synchronization | A synchronization object | The strategy to achieve known synchronization between these two components in the Plex. | | Resolution | float | The resolution to which synchronization should be applied. | | From | UUID | The UUID of the component that the synchronization strategy must be applied to. | | To | UUID | The UUID of the component whose clock we synchronize into by applying our synchronization strategy (to the `from` component). |
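Returning to the spatial constraint traversal mentioned above: here is a minimal sketch, assuming `networkx` is installed. The JSON field names (`from`, `to`, `covariance`) and the use of the covariance trace as a scalar edge weight are illustrative assumptions based on the tables above, not MetriCal's exact implementation; see the linked repository for the canonical approach.

```python
import json

import networkx as nx
import numpy as np


def best_extrinsic_path(plex_path, from_uuid, to_uuid):
    """Return the chain of component UUIDs whose spatial constraints give the
    lowest accumulated covariance between two components (illustrative only)."""
    with open(plex_path) as f:
        plex = json.load(f)

    graph = nx.Graph()
    for constraint in plex["spatial_constraints"]:
        # Collapse the 6x6 covariance to a scalar "uncertainty" weight via its trace.
        weight = float(np.trace(np.array(constraint["covariance"])))
        graph.add_edge(constraint["from"], constraint["to"], weight=weight)

    # The minimum-weight path approximates the minimum-overall-covariance route.
    return nx.shortest_path(graph, source=from_uuid, target=to_uuid, weight="weight")
```

Once you have the path, compose the extrinsics along it (in the "to from" direction established earlier) to get the full transform between the two components.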
### The Problem With Clocks[​](#the-problem-with-clocks "Direct link to The Problem With Clocks") In the world of hardware, measuring time can be a challenge. Two clocks might differ in several different ways; without taking these nuances into account, many higher-level perception tasks can fail. Let's take the example below: two different clocks, possibly from two different hosts, that might be informing separate components in our plex. ![Two clocks out of sync](/assets/images/time_const_intro-49859706a6dd80d4f31c5b7cbd6b7f7f.png) Temporal constraints can balance these different clocks across a plex in order to make sure time confusion never occurs. This is achieved through ***Synchronization***. ### Synchronization[​](#synchronization "Direct link to Synchronization") Synchronization describes the following relationship between two clocks: $C_{to} = (1e9 + \text{skew}) \cdot C_{from} + \text{offset}$ | Field | Type | Description | | --- | --- | --- | | offset | Integer | The epoch offset between two clocks, in units of integer nanoseconds. | | skew | Integer | The scale offset between two clocks. Unitless. | #### Offset[​](#offset "Direct link to Offset") Unless two components are using the same clock, there's a chance that they are offset in time. This means that time *t* in one clock does not align with time *t* in the other. Fixing this is rather simple: just shift the time values in the `from` clock by the `offset` parameter until their two *t* times match. ![Applying offset to a clock](/assets/images/time_const_offset-9714fb97fbee94e31b853c1448da7f60.png) #### Skew[​](#skew "Direct link to Skew") Skew compensates for the difference in the duration of a time increment between two clocks. In other words, a second in one clock might be a different length than a second in another! These differences can be very subtle, but they will result in some unwanted drift. Applying `skew` to a `from` clock's timestamps will match the duration of a second to that of the `to` clock. ![Applying skew to a clock](/assets/images/time_const_skew-323f3c9e0b0dbf4682327f80fd36083a.png) Between `skew` and `offset`, we have the tools we need to synchronize time between two clocks! Note that components that use the same host clock will need no synchronization; their `skew` and `offset` remain `0`. Further Reading MetriCal has adopted the terminology from [this paper](//www.iol.unh.edu/sites/default/files/knowledgebase/1588/clock_synchronization_terminology.pdf) from the University of New Hampshire's InterOperability Laboratory. ### Resolution[​](#resolution "Direct link to Resolution") ***Resolution*** helps MetriCal identify observations that are meant to be synchronized between two components. Say we have two camera components. The first is producing one image every 5 seconds; the second produces a new image every 1.3 seconds. We want to pair up observations from the two separate streams that we know are uniquely synced in time as a one-to-one set. Our resolution tells the Platform how far from an observation we want to search for a synced pair. In the case of our first camera, we know that one new frame comes every 5 seconds. This means that there's a span of 2.5 seconds on either side of this image that could hold a matching observation from our second camera. So, we set `resolution` to `2.5 * 1e9` (for nanoseconds).
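To illustrate the kind of matching that resolution enables, here is a minimal sketch in plain Python (not how MetriCal implements matching internally) that pairs each observation from one stream with the closest-in-time observation from another, discarding candidates outside the resolution window:

```python
def pair_observations(times_a, times_b, resolution_ns):
    """Pair each timestamp in stream A with the nearest timestamp in stream B
    that falls within +/- resolution_ns. All timestamps are in nanoseconds."""
    pairs = []
    for t_a in times_a:
        closest = min(times_b, key=lambda t_b: abs(t_b - t_a), default=None)
        if closest is not None and abs(closest - t_a) <= resolution_ns:
            pairs.append((t_a, closest))
    return pairs


# Camera one produces a frame every 5 s, camera two every 1.3 s; resolution of 2.5 s.
cam_one = [int(5e9 * i) for i in range(4)]
cam_two = [int(1.3e9 * i) for i in range(16)]
print(pair_observations(cam_one, cam_two, resolution_ns=2.5e9))
```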
The Platform will then look for any synced observation candidates in camera two and find the observation that matches most closely in time to the image in camera one. All that being said, resolution is a concept better shown than told: ![Applying resolution to an observation series](/assets/images/time_const_resolution-b8cbb2b742bbdb842bb95dd6981318b4.png) If one is confident that two observation streams are in-sync, one may set the resolution to be fairly small. However, given how irregularly some components can behave, a very tight resolution is generally neither necessary nor recommended. ## Semantic Constraints[​](#semantic-constraints "Direct link to Semantic Constraints") Semantic constraints are a bit different from spatial and temporal constraints. While spatial and temporal constraints exist to model the physical realities of components in a system, semantic constraints exist to model relationships in the system that don't fall within those boundaries. Semantic constraints are defined by the following fields: | Field | Type | Description | | --- | --- | --- | | Components | An array of component UUIDs | A set of unique UUIDs (corresponding to components that exist within the plex) that are grouped under this semantic. | | Name | String | The name for these semantics. Usually indicates the purpose or function of the group. | | UUID | UUID | The unique identifier for the semantic constraint. | Semantic constraints can label subplexes within a plex, label individual OEM pieces of hardware (e.g. labeling a single Intel RealSense device), or group together components that share some common function (e.g. grouping a stereo pair of cameras together). Some semantic constraints can be used by MetriCal to generate unique kinds of metrics after calibration. These are identified purely through the semantic constraint's `name` field. For example, MetriCal currently understands the following semantic constraint kinds: * `"stereo_pair"`: This denotes that two cameras are part of a stereo pair. This also tells MetriCal to compute stereo rectification metrics after a calibration is complete. --- # Covariance Both components and constraints include ***covariance***, a measure of uncertainty in a Plex. The inclusion of covariance as a core idea is one of the biggest differentiators of MetriCal from competing calibration software. ## The Role Of Covariance[​](#the-role-of-covariance "Direct link to The Role Of Covariance") Many calibration pipelines will have some notion of whether or not some parameter is "fixed" or "variable." Fixed parameters are treated as being *perfectly observed*: * Their values are known * They have no error * They are not optimized during a calibration Variable parameters are the opposite, treated as *perfectly unobserved*: * Their values are not known at all * They may have any amount of error * They are always optimized during a calibration This creates a dichotomy where we either have zero (0%) information about some quantity, or we have perfect information (100%) about some quantity. That dichotomy rarely reflects reality; most of the time, there is a reasonable and quantifiable amount of uncertainty in any given system. As you might have guessed, ***covariance*** provides a method of modeling this uncertainty.
## Describing Covariance[​](#describing-covariance "Direct link to Describing Covariance") While we may not know the *exact* values of every parameter in our calibration, we can typically make an educated guess. It is common practice to state what we know about a parameter like this: **a parameter is 1000.0 units, ± 0.010 units**. This ± 0.010 tolerance gives us a way to initialize a parameter's ***variance-covariance*** (shortened to just *covariance*). Many of MetriCal's processes take this information into account as a statistical prior. Rather than "fixing" any of our quantities, we update these values through the optimization process. This guarantees that we will never converge to values with variance / precision that is worse than what is specified by our priors. warning When using covariance, know that this value is standard deviation *squared*. In our example above, if our standard deviation is ± 0.010 units, then our covariance is (± 0.010)² units². MetriCal incorporates the concept of covariance for all observable quantities in our calibration process. This does add some complexity to the system, but provides the benefit of statistical rigor in our calibration pipeline. --- # Object Space An ***Object Space*** refers to a known set of features in the environment that is observed in our calibration data. This often takes the form of a target marker, a calibration target, or similar mechanism. This is one of the main user inputs to MetriCal, along with the system's [Plex](/metrical/core_concepts/plex_overview.md) and [calibration dataset](/metrical/configuration/data_formats.md). One of the most difficult problems in calibration is that of cross-modality data correlation. A camera is fundamentally 2D, while LiDAR is 3D. How do we bridge the gap? By using the right object space! Many seasoned calibrators are familiar with the checkerboards and grids that are used for camera calibration; these are object spaces as well. Object spaces as used by MetriCal help define the parameters needed to accurately and precisely detect and manipulate features seen across modalities in the environment. This section serves as a reference for the different kinds of object spaces, detectors, and features supported by MetriCal; and perhaps more importantly, how to combine these object spaces. ## Diving Into The Unknown(s)[​](#diving-into-the-unknowns "Direct link to Diving Into The Unknown(s)") Calibration processes often use external sources of knowledge to learn values that a component (a camera, for instance) couldn't derive on its own. Continuing the example with cameras as our component of reference — there's no sense of scale in a photograph. A photo of a mountain could be larger than life, or it could be a diorama; the camera has no way of knowing, and the image unto itself has no way to communicate the metric scale of what it contains. This is where object spaces come into play. If we place a target with known metric properties in the image, we now have a reference for metric space. ![Target in object space](/assets/images/object_space-16b273aad98aee5c8e4675512771cea6.png) Most component types require some target field like this for proper calibration. For LiDAR we use a [circular target](/assets/files/lidar_circle_target-a6b529718233d9272cc14e53ea0b886a.pdf) comprising a checkerboard and some retroreflective tape; similarly, for cameras MetriCal supports a whole host of different checkerboards and signalized markers. Each of these targets is referred to as an object space.
## Object Spaces are Optimized[​](#object-spaces-are-optimized "Direct link to Object Spaces are Optimized") In MetriCal, even object space points have covariance! This reflects the imperfection of real life; even the sturdiest target can warp and bend, which will create uncertainty. We embed this possibility in the covariance value of each object space point. That way, you can have greater certainty of the results, even if your target is not the perfectly "flat" or idealized geometric abstraction that is often assumed in calibration software. One of MetriCal's most distinctive capabilities is its ability to optimize the object space itself. With MetriCal, it is possible to calibrate with boards that are imperfect without inducing projective compensation errors back into your [final results](/metrical/results/report.md). ### Multi-Target Calibrations? No Problem.[​](#multi-target-calibrations-no-problem "Direct link to Multi-Target Calibrations? No Problem.") In addition to MetriCal's ability to optimize the object space, MetriCal can also optimize across multiple object spaces. By specifying multiple targets in a scene, it becomes possible to calibrate complex scenarios that wouldn't otherwise be feasible. For example, MetriCal can optimize an extrinsic between two cameras with zero overlap if multiple object spaces are used. ## Object Space Structure[​](#object-space-structure "Direct link to Object Space Structure") Like our [plex](/metrical/core_concepts/plex_overview.md) structure, object spaces are serialized as JSON objects or files and are passed into MetriCal for many of its different modes. Object Space Schema Note that only users with an advanced / unusual setup will need to author their own object spaces as described below. For the majority of use cases, our [premade targets](https://gitlab.com/tangram-vision/platform/metrical_premade_targets) and the target selection wizard will automatically account for any gotchas and provide you with both valid targets and a well-formed object space file. | Field | Type | Description | | --- | --- | --- | | `object_spaces` | A map of UUID to object space objects | The collection of object spaces (identified by UUID) to use. This comprises detector/descriptor pairs for each object space. | | `spatial_constraints` | An array of spatial constraints | (Optional) Spatial constraints between multiple object spaces, if applicable. Learn more in ["Using Multiple Targets"](/metrical/targets/multiple_targets.md) | | `mutual_construction_groups` | A set of object space UUIDs | (Optional) Sets of object spaces that exist as a mutual construction. Learn more in ["Using Multiple Targets"](/metrical/targets/multiple_targets.md) | ### Full Schema[​](#full-schema "Direct link to Full Schema") [Raw schema file](/assets/files/object_space_schema-265636d8826af51aa78d3e44c0945411.json) --- # Plex MetriCal uses what is called a **Plex** as a description of the spatial, temporal, and semantic relationships within your perception system. In short, a Plex is a representation of the physical system that you are calibrating. It is a graph of [***components***](/metrical/core_concepts/components.md) and [***constraints***](/metrical/core_concepts/constraints.md) that fully describe the perception system being optimized by the Tangram Vision Platform.
![Simple Plex](/assets/images/simple_plex-6281af4cca0008de73b82da9debdee45.png) Systems and Plexes have a *one-to-one relationship*. This means that a Plex can only describe one perception system, and a system should be fully described by one Plex. For example, the above plex description could be two cameras: ![A small system](/assets/images/build_plex_system-911b22626b10a87012bd60f45d9265ab.png) ## Why Plex?[​](#why-plex "Direct link to Why Plex?") A common occurrence in the perception industry is that the words *sensor* and *device* are often ambiguous and used interchangeably. This can often lead to confusion, which is the last thing one wants in a complicated perception system. To that end, we don't use these terms at all! "Plex" is sufficient. Here's a quick example: While products like an Intel RealSense could be referred to as a single "device" or "sensor", they're often a combination of several constituent components, such as: * Two infrared cameras * A color camera * An accelerometer and gyroscope (6-DoF IMU) * A "depth" stream, synthesized from observations of the two infrared cameras Each of these components is represented independently in the Plex, with relations denoted separately as well. ![A complex sensor product in Plex form](/assets/images/hifi_plex-c897e4c56d0aafac4592588bf27137ba.png) We can even add a second module to the Plex with very little effort. Just create the components and constraints for the second module! ![Two complex sensor products connected into a single Plex](/assets/images/two_hifi_plex-e9cef737a648f9494bf9d2a52abcd05f.png) This formulation helps prevent ambiguity within MetriCal. For example, one might ask for "the extrinsics between two products". This is not a useful question, since there are many ways to define what that extrinsic might be, depending on the assumed convention. Instead, with a plex we can ask for the "extrinsics between color camera with UUID X and color camera with UUID Y", which is specific and unambiguous. ![Traversing a single large Plex](/assets/images/plex_traversal-38cafbf0921d4eab12dce033c6de2f86.png) ## How is the Plex used?[​](#how-is-the-plex-used "Direct link to How is the Plex used?") Plexes are used by MetriCal as a representation of your system. For this reason, plexes are both input into MetriCal during calibration and generated by MetriCal as a calibration result. By using the plex, we can gauge the following prior to a calibration: * Initial intrinsic values, as well as their prior (co)variances * Initial extrinsic values, as well as their prior (co)variances * The temporal relationships between components in the system, which informs how MetriCal estimates "synchronized" observations (used to infer spatial relationships). * Semantic relationships between components (e.g. a pair of cameras comprising a stereo pair), which tell MetriCal, for example, which cameras to generate stereo pair metrics between. This makes the plex very flexible, in that it can be used to configure and describe parts of the calibration problem as a function of the spatial, temporal, and semantic relationships present within your system. It is a declarative means of describing how components are related. Likewise, MetriCal will also use the exact same plex format as part of the output of calibration (e.g. when you call `metrical calibrate`).
From the output plex, we can gauge the following: * Final calibrated intrinsics values, as well as their posterior covariances * Final calibrated extrinsics values, as well as their posterior covariances The plex is a convenient means to serialize, copy, share, and parse known information about a system. Many of MetriCal's features directly produce or consume a plex. ## Subplexes[​](#subplexes "Direct link to Subplexes") MetriCal always interprets a Plex at face value — we never assume to know your system better than you do. However, this can lead to some awkward situations when trying to profile your system. Here's the same sensor product that we've been looking at with only a few constraints added: ![Complex sensor product as subplex](/assets/images/subplexes-1e607f83b16bf18e86305f19a532914b.png) Instead of one fully connected Plex, we have one Plex made of many *subplexes*. This can be interpreted by MetriCal in a few ways: 1. The missing constraints between e.g. color and infrared cameras cannot be known. 2. The missing constraints between e.g. color and infrared cameras are unimportant / should not be calibrated ever. 3. The missing constraints between e.g. color and infrared cameras were not added for some other reason (e.g. they were forgotten, sync was not working, etc.). In any case, this will inform MetriCal's behavior. Calibration, for instance, will look to fill in all possible constraints between components. As a consequence, if two components do not have any possible spatial constraint path between them, then MetriCal will not attempt to infer a relative extrinsic or spatial constraint between them in the output. ## Plex Conversion[​](#plex-conversion "Direct link to Plex Conversion") While plexes are a convenient format when working with MetriCal, other software may not support Plexes as a format. In these instances, it can be useful to convert parts or the whole plex into a more usable format. Some examples include: * Converting the intrinsics and distortion parameters for a given camera into a look-up table for fast image rectification. * Converting the intrinsics and distortion parameters for a given stereo pair into a look-up table for fast stereo rectification. * Converting a plex into a [URDF](https://docs.ros.org/en/humble/Tutorials/Intermediate/URDF/URDF-Main.html) for use with ROS / ROS2 tooling. See the [`metrical shape`](/metrical/commands/shape/shape_overview.md) docs for more information on how to extract or convert information from the plex. ## Plex Structure[​](#plex-structure "Direct link to Plex Structure") Don't write a plex by hand! Plex files are complex and error-prone to author manually. Instead, use the [Init command](/metrical/commands/init.md)! This command will generate a plex for you based on your system data, and you can modify it as needed afterwards. Plex data is serialized as JSON for the convenience of being able to store and edit the plex in a text-based format. This can bring about some complications: while JSON is a plaintext format that is well-suited to manipulation (and to some extent, reading), it is loosely defined, and the structure of what is inside a JSON file or object is not always easy to understand at a glance. | Field | Type | Description | | --- | --- | --- | | `"uuid"` | String | A universally unique identifier for the Plex. |
| `"creation_timestamp"` | Integer | The creation timestamp of the Plex, from the Unix epoch, in nanoseconds. | | `"components"` | Array of Component Objects | An array of JSON objects describing components within the Plex. | | `"spatial_constraints"` | Array of Spatial Constraint Objects | An array of JSON objects describing spatial constraints within the Plex. | | `"temporal_constraints"` | Array of Temporal Constraint Objects | An array of JSON objects describing temporal constraints within the Plex. | | `"semantic_constraints"` | Array of Semantic Constraint Objects | An array of JSON objects describing semantic constraints within the Plex. | ### Full Schema[​](#full-schema "Direct link to Full Schema") [Raw schema file](/assets/files/plex_schema-9b6a8c3348f2e153f3cb98a6b6f3b595.json) --- # Projective Compensation Calibration can be described as: > "*A statistical optimization process that aims to solve component models, constraints, and object space collectively.*" This means that we have many parameters to solve for, many different observations, and an expectation of statistical soundness. In an ideal world, all of our observations would be *independent* and *non-correlated*, i.e. one parameter's value wouldn't affect other parameters. However, this is rarely the case. Many of our components' parameters will be correlated with both observations and other parameters. This also means that *errors* in correlated parameters affect one another. When errors in the determination of one parameter cause errors in the determination of a different parameter, we call that **projective compensation**. Errors in the first parameter are being *compensated* for by *projecting* the error into the second. Projective compensation can happen as a result of: 1. Poor choice of model. If a parameter chosen to model your components conflates many other parameters together, then these parameters cannot be optimized in a statistically independent manner. 2. Poor data capture. Because the calibration process reconstructs the "object space" from component observations, the data collection process influences how the calibration parameters are determined through the optimization process. It's hard to directly measure projective compensation in an optimization with the statistical tools we have today. It is possible to use output parameter covariance to indirectly observe when projective compensation happens. Furthermore, depending on what parameters are correlated, we can often inform the modeling or calibration process to reduce this effect. Despite it being possible to detect when projective compensation is occurring (and it is almost always occurring to some degree or another), it is more useful to understand the different results and [metrics](/metrical/results/output_file.md) that MetriCal outputs and how those can be used to signal what kinds of projective compensation may be present. --- # Welcome to MetriCal ![MetriCal banner](/assets/images/banner_metrical-fec4cd5861b6e349e14755aa6bd18254.png) ## Introduction[​](#introduction "Direct link to Introduction") MetriCal delivers accurate, precise, and expedient calibration results for multimodal sensor suites. Its easy-to-use interface and detailed metrics enable enterprise-level autonomy at scale. Anyone can download MetriCal; no license is required. Unlicensed users have near-complete use of the software, from processing to metrics generation. The only limitation is the calibration results, which require a license to view and save.
To get started, check out the [installation instructions](/metrical/configuration/installation.md). ## What is Sensor Calibration?[​](#what-is-sensor-calibration "Direct link to What is Sensor Calibration?") Sensor calibration is the process of determining the parameters that define how a sensor captures data from the world. This includes: * Intrinsic parameters: Properties internal to the sensor itself (like a camera's focal length or distortion) * Extrinsic parameters: The position and orientation of sensors relative to each other or a reference frame Good calibration ensures that: * Measurements from a single sensor are accurate * Data from multiple sensors can be properly aligned and fused * Perception algorithms receive consistent, accurate input data ## Why MetriCal?[​](#why-metrical "Direct link to Why MetriCal?") MetriCal provides a comprehensive solution for calibrating various sensor configurations. Unlike many calibration tools that focus on single sensors or specific combinations, MetriCal offers: * Support for single cameras, multi-camera arrays, LiDAR sensors, and IMU sensors * Joint optimization of intrinsic and extrinsic parameters * Detailed uncertainty metrics to assess calibration quality * Flexible workflows for different calibration scenarios * Visualization tools to inspect calibration results * Processing of ROS bags, MCAP files, and folder datasets * Conversion of calibration results into URDF files for easy integration into ROS * Support for a variety of calibration targets * Pixel-wise lookup tables for both single camera correction and stereo pair rectification ...and so much more! Whether you're building a simple computer vision system or a complex autonomous vehicle perception stack, these guides will help you achieve accurate calibration with MetriCal. ## Sensor Calibration Guides[​](#sensor-calibration-guides "Direct link to Sensor Calibration Guides") Good sensor calibration is key to building accurate perception systems. MetriCal helps you calibrate many types of sensors—from a single camera to complex multi-sensor setups. Our guides take you through each step of the process, from collecting data to checking your results. We cover single cameras, multiple cameras, and camera-LiDAR combinations with clear instructions and practical tips. Choose the guide below that matches your setup to get started.
[![Calibration Guide Overview](/assets/images/guide_overview-8670fecb68e03165a9859c0fcbb2fe51.png)](/metrical/calibration_guides/guide_overview.md) ### [Calibration Guide Overview](/metrical/calibration_guides/guide_overview.md) [General guidelines for all calibration types](/metrical/calibration_guides/guide_overview.md) [View Guide →](/metrical/calibration_guides/guide_overview.md) [![Single Camera Calibration](/assets/images/guide_single_camera-84daf5456cf83011e4bb31d0e2a90d02.png)](/metrical/calibration_guides/single_camera_cal.md) ### [Single Camera Calibration](/metrical/calibration_guides/single_camera_cal.md) [Calibrate intrinsics for a single camera](/metrical/calibration_guides/single_camera_cal.md) [View Guide →](/metrical/calibration_guides/single_camera_cal.md) [![Multi-Camera Calibration](/assets/images/guide_multi_camera-5dcd4150997955a8550369ad7ff344b0.png)](/metrical/calibration_guides/multi_camera_cal.md) ### [Multi-Camera Calibration](/metrical/calibration_guides/multi_camera_cal.md) [Calibrate multiple cameras together](/metrical/calibration_guides/multi_camera_cal.md) [View Guide →](/metrical/calibration_guides/multi_camera_cal.md) [![Camera ↔ LiDAR Calibration](/assets/images/guide_camera_lidar-8c819ec541379ef39cd30091d9f2c5da.png)](/metrical/calibration_guides/camera_lidar_cal.md) ### [Camera ↔ LiDAR Calibration](/metrical/calibration_guides/camera_lidar_cal.md) [Calibrate cameras with LiDAR sensors](/metrical/calibration_guides/camera_lidar_cal.md) [View Guide →](/metrical/calibration_guides/camera_lidar_cal.md) [![Camera ↔ IMU Calibration](/assets/images/guide_camera_imu-dc8e90a97865eebc696d151c26d3ae00.png)](/metrical/calibration_guides/camera_imu_cal.md) ### [Camera ↔ IMU Calibration](/metrical/calibration_guides/camera_imu_cal.md) [Calibrate cameras with IMU sensors](/metrical/calibration_guides/camera_imu_cal.md) [View Guide →](/metrical/calibration_guides/camera_imu_cal.md) [![LiDAR ↔ LiDAR Calibration](/assets/images/guide_lidar_lidar-bb8ff0a32d3b043aac0698339fad7445.png)](/metrical/calibration_guides/lidar_lidar_cal.md) ### [LiDAR ↔ LiDAR Calibration](/metrical/calibration_guides/lidar_lidar_cal.md) [Calibrate multiple LiDAR sensors](/metrical/calibration_guides/lidar_lidar_cal.md) [View Guide →](/metrical/calibration_guides/lidar_lidar_cal.md) ## Using an LLM?[​](#using-an-llm "Direct link to Using an LLM?") Point ChatGPT/Claude/Gemini/etc at our [llms.txt](https://docs.tangramvision.com/llms.txt) or [llms-full.txt](https://docs.tangramvision.com/llms-full.txt) files to give the LLM an easy way to ingest MetriCal documentation in markdown format. You can then ask the LLM to find information in the docs, explain concepts and tradeoffs, assist with configuration and troubleshooting, and so on. warning We only provide `llms.txt` content and associated markdown files for the latest stable version of MetriCal! If you need support for older versions of MetriCal or if you'd like to see more support for LLMs, please [contact us](/metrical/support_and_admin/contact.md). --- # MetriCal JSON Output ## Serialized Results and Metrics[​](#serialized-results-and-metrics "Direct link to Serialized Results and Metrics") Every calibration outputs a comprehensive JSON of metrics, by default named `results.json`. 
This file contains: * The optimized plex representing the calibrated system * The optimized object space (with any updated spatial constraints for a given object space) * Metrics derived over the dataset that was calibrated: * Summary statistics for the adjustment * Optimized object space features (e.g. 3D target or corner locations) * Residual metrics for each observation used in the adjustment ### Optimized Plex[​](#optimized-plex "Direct link to Optimized Plex") The optimized Plex is a description of the now-calibrated System. This Plex is typically more "complete" and information-rich than the input Plex, since it is based on the real data used to calibrate the System. The optimized plex can be pulled out of the `results.json` by using `jq`: ``` jq .plex results.json > plex.json ``` ### Optimized Object Space[​](#optimized-object-space "Direct link to Optimized Object Space") MetriCal will optimize over the object spaces used in every calibration. For example, if your object space consists of a checkerboard, MetriCal will directly estimate how flat (or not) the checkerboard actually is using the calibration data. This comes in two forms in the `results.json` file: 1. An optimized object space definition that can be re-used in future calls to `metrical calibrate`. 2. A collection of optimized object space features (i.e. the actual 3D feature or point data) optimized using the calibration data. The former can be extracted from the `results.json` by using `jq`: ``` jq .object_space results.json > object_space.json ``` The latter, meanwhile, is embedded in the metrics themselves: ``` jq .metrics.optimized_object_spaces results.json > optimized_object_spaces.json ``` The latter is interesting insofar as it can be plotted in 3D to see how object features such as targets or the positions of corner points were estimated: ``` { "1c22b1c6-4d5a-4058-a71d-c9716a099d48": { "ids": [40, 53, 34, 3, 43], "xs": [0.3838, 0.7679, 0.6717, 0.2883, 0.6717], "ys": [0.1917, 0.096, 0.2879, 0.5761, 0.192], "zs": [0.00245, -0.00159, 0.00122, -0.00155, 0.00085] } } ``` The optimized object space features are in a JSON object where * the keys are UUIDs for each object space * the values are an object containing the feature identifiers (`ids`), as well as Cartesian coordinate data (`xs`, `ys`, `zs`) for each feature A short plotting sketch using this structure appears just after the residual metrics overview below. ### Summary Statistics[​](#summary-statistics "Direct link to Summary Statistics") The main entrypoint into the metrics contained in the `results.json` is the collection of summary statistics. Of all the metrics output in a `results.json` file, the summary statistics run the greatest risk of being misinterpreted. Always bear in mind that these figures represent broad, global mathematical strokes, and should be interpreted holistically along with the rest of the metrics of a calibration. These summary statistics can be extracted from the metrics using `jq`: ``` jq .metrics.summary_statistics results.json > summary_statistics.json ``` These are the same summary statistics that are output in the [console logs](/metrical/results/report.md#summary-statistics-charts-ss). ### Residual Metrics[​](#residual-metrics "Direct link to Residual Metrics") Residual metrics are generated for each and every cost or observation added to the calibration. The most immediately familiar residual metric might be reprojection error, but similar metrics can be derived for other modalities and observations as well.
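As promised above, here is a minimal sketch for plotting the optimized object space features in 3D, assuming `matplotlib` is installed and that `optimized_object_spaces.json` was produced with the `jq` command shown earlier:

```python
import json

import matplotlib.pyplot as plt

with open("optimized_object_spaces.json") as f:
    object_spaces = json.load(f)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for uuid, features in object_spaces.items():
    # xs/ys/zs hold the optimized 3D position of each detected feature.
    ax.scatter(features["xs"], features["ys"], features["zs"], label=uuid[:8])
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
ax.legend()
plt.show()
```

The z-values in particular give a quick visual read on how flat (or warped) a target actually was during the capture.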
A full list of residual metric types is linked below: | Metric Type | Produced by | | --- | --- | | [Circle Misalignment](/metrical/results/residual_metrics/circle_misalignment.md) | All Camera-LiDAR pairs | | [Composed Relative Extrinsics Error](/metrical/results/residual_metrics/composed_relative_extrinsics.md) | All Components and Objects | | [Image Reprojection](/metrical/results/residual_metrics/image_reprojection.md) | All Cameras | | [IMU Preintegration Error](/metrical/results/residual_metrics/imu_preintegration_error.md) | All IMUs | | [Interior Points to Plane Error](/metrical/results/residual_metrics/interior_points_to_plane_error.md) | All Camera-LiDAR pairs | | [Object Inertial Extrinsics Error](/metrical/results/residual_metrics/object_inertial_extrinsic_error.md) | All IMUs | | [Paired 3D Point Error](/metrical/results/residual_metrics/paired_3d_point_error.md) | All LiDAR-LiDAR pairs | | [Paired Plane Normal Error](/metrical/results/residual_metrics/paired_plane_normal_error.md) | All LiDAR-LiDAR pairs | ### Full Schema[​](#full-schema "Direct link to Full Schema") [Raw schema file](/assets/files/metrical_results_schema-25a30d920c62b85f15ec319975aa6c02.json) --- # MetriCal Reports Successful MetriCal calibration runs generate a comprehensive set of charts and diagnostics to help you understand your calibration quality. These are output to your command line interface as a full report detailing each run. This documentation explains each report section, what metrics they display, and how to interpret the figures to improve your calibration workflow. ## Generating Reports[​](#generating-reports "Direct link to Generating Reports") Reports can be saved to an HTML file for later inspection. This is useful for sharing results with team members or for archiving results in a human-readable format. ### Calibrate Mode[​](#calibrate-mode "Direct link to Calibrate Mode") Use the `--report-path` argument when running MetriCal's [Calibrate command](/metrical/commands/calibrate.md) to save the CLI report directly to an HTML file. ``` metrical calibrate --report-path "report.html" ... ``` ### Report Mode[​](#report-mode "Direct link to Report Mode") Use the [Report command](/metrical/commands/report.md) to generate a report from a plex or the [output file](/metrical/results/output_file.md) of a previous calibration. ``` metrical report [OPTIONS] ``` ## Color Coding in MetriCal Output[​](#color-coding-in-metrical-output "Direct link to Color Coding in MetriCal Output") MetriCal uses ANSI terminal codes for colorizing output according to an internal assessment of metric quality: * █ **Cyan**: Spectacular (excellent calibration quality) * █ **Green**: Good (solid calibration quality) * █ **Orange**: Okay, but generally poor (may need improvement) * █ **Red**: Bad (likely needs attention) Note that this quality assessment is based on experience with a variety of datasets and may not accurately reflect your specific calibration needs. Use these colors as a general guide, but always consider the context of your calibration setup and goals. ## Chart Sections Overview[​](#chart-sections-overview "Direct link to Chart Sections Overview") MetriCal organizes outputs into six main sections: 1. **Data Inputs** (`DI-*` prefix) - Information about your input data 2. **Camera Modeling** (`CM-*` prefix) - Charts showing how well the camera models fit the data 3.
**Extrinsics Info** (`EI-*` prefix) - Metrics on the spatial relationships between components 4. **Calibrated Plex** (`CP-*` prefix) - Results of your calibration 5. **Summary Stats** (`SS-*` prefix) - Overall performance metrics 6. **Data Diagnostics** - Warnings and advice about your calibration dataset Let's explore each chart in detail. ## Data Inputs Charts (DI)[​](#data-inputs-charts-di "Direct link to Data Inputs Charts (DI)") These charts provide information about the data you provided to MetriCal. ### DI-1: Calibration Inputs[​](#di-1-calibration-inputs "Direct link to DI-1: Calibration Inputs") [MetriCal Output - di\_1](/_pre_gen_html/di_1.html) This table displays the basic configuration settings used for your calibration run: * **MetriCal Version**: The version of MetriCal used for the calibration * **Optimization Profile**: The optimization strategy used (Standard, Performance, or Minimize-Error) * **Motion Thresholds**: Camera and LiDAR motion thresholds used (or "Disabled" if motion filter was turned off) * **Preserve Input Constraints**: Whether input constraints were preserved * **Object Relative Extrinsics Inference**: Whether ORE inference was enabled ### DI-2: Object Space Descriptions[​](#di-2-object-space-descriptions "Direct link to DI-2: Object Space Descriptions") [MetriCal Output - di\_2](/_pre_gen_html/di_2.html) This table describes the calibration targets (object spaces) used in your dataset: * **Type**: The type of target object (e.g., DotMarkers, Charuco, etc.) * **UUID**: The unique identifier of the object * **Detector**: The detector used (e.g., "Dictionary: Aruco7x7\_50") and its description ### DI-3: Processed Observation Count[​](#di-3-processed-observation-count "Direct link to DI-3: Processed Observation Count") [MetriCal Output - di\_3](/_pre_gen_html/di_3.html) This critical table shows how many observations were processed from your dataset: * **Component**: The sensor component name and UUID * **# read**: Total number of observations read from the dataset * **# with detections**: Number of observations where the detector identified features * **# after quality filter**: Number of detections that passed the quality filter * **# after motion filter**: Number of detections that passed the motion filter If there's a significant drop between any of these columns, it may indicate an issue with your dataset or settings. This will be flagged in the diagnostics section. ### DI-4: Camera FOV Coverage[​](#di-4-camera-fov-coverage "Direct link to DI-4: Camera FOV Coverage") [MetriCal Output - di\_4](/_pre_gen_html/di_4.html) This visual chart shows how well your calibration data covers the field of view (FOV) of each camera: * The chart divides each camera's image into a 10x10 grid * Each grid cell shows how many features were detected in that region * Color coding indicates feature density: * █ **Red**: No features detected * █ **Orange**: 1-15 features detected * █ **Green**: 16-50 features detected * █ **Cyan**: >50 features detected Ideally, you want to see a mostly green/cyan grid with minimal red cells, indicating good coverage across the entire FOV. Poor coverage can lead to inaccurate intrinsics calibration. 
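If you want to reproduce a coverage check like this on your own detections, here is a minimal sketch with NumPy. The 10×10 grid and count thresholds mirror the chart description above; the detected feature pixel coordinates (and the 1920×1080 image size) are assumed inputs from your own detection pipeline, not a MetriCal API:

```python
import numpy as np


def fov_coverage(xs, ys, width, height, grid=10):
    """Bin detected feature pixel coordinates into a grid x grid coverage map."""
    counts, _, _ = np.histogram2d(xs, ys, bins=grid, range=[[0, width], [0, height]])
    return counts.T  # rows correspond to image rows, columns to image columns


# Illustrative only: 500 random detections on a 1920x1080 image.
rng = np.random.default_rng(0)
counts = fov_coverage(rng.uniform(0, 1920, 500), rng.uniform(0, 1080, 500), 1920, 1080)

# Same buckets as the DI-4 color coding: 0, 1-15, 16-50, >50 features per cell.
for label, mask in [
    ("red (empty)", counts == 0),
    ("orange (1-15)", (counts > 0) & (counts <= 15)),
    ("green (16-50)", (counts > 15) & (counts <= 50)),
    ("cyan (>50)", counts > 50),
]:
    print(label, int(mask.sum()), "cells")
```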
### DI-5: Detection Timeline[​](#di-5-detection-timeline "Direct link to DI-5: Detection Timeline") [MetriCal Output - di\_5](/_pre_gen_html/di_5.html) This chart visualizes when detections occurred across your dataset timeline: * X-axis represents time in seconds since the first observation * Each row represents a different sensor component * Points indicate timestamps when features were detected * Components are color-coded for easy differentiation This helps you visualize how synchronized your sensor data is and identify any gaps in observations. If you expect all of your observations to align nicely, but they aren't aligned at all, it's a sign that your timestamps are not being written or read correctly. Either way, this table is a good place to start debugging. ## Camera Modeling Charts (CM)[​](#camera-modeling-charts-cm "Direct link to Camera Modeling Charts (CM)") These charts show how well the selected camera models fit your calibration data. ### CM-1: Binned Reprojection Errors[​](#cm-1-binned-reprojection-errors "Direct link to CM-1: Binned Reprojection Errors") [MetriCal Output - cm\_1](/_pre_gen_html/cm_1.html) This heatmap visualizes reprojection errors across the camera's field of view: * The chart shows a 10x10 grid representing the camera FOV * Each cell contains the weighted RMSE (Root Mean Square Error) of reprojection errors in that region * Color coding indicates error magnitude: * █ **Cyan**: < 0.1px error * █ **Green**: 0.1-0.25px error * █ **Orange**: 0.25-1.0px error * █ **Red**: > 1px error or no data Ideally, most cells should be cyan or green. Areas with consistently higher errors (orange/red) may indicate issues with your camera model or lens distortion that isn't being captured correctly. ### CM-2: Stereo Pair Rectification Error[​](#cm-2-stereo-pair-rectification-error "Direct link to CM-2: Stereo Pair Rectification Error") [MetriCal Output - cm\_2](/_pre_gen_html/cm_2.html) For multi-camera setups, this chart shows the stereo rectification error between camera pairs. Any cameras that saw the same targets at the same time are added to a stereo pair. * Lists each camera pair combination * Shows various error metrics for stereo rectification * Indicates how well the extrinsic calibration aligns the two cameras Lower values indicate better stereo calibration. ## Extrinsics Info Charts (EI)[​](#extrinsics-info-charts-ei "Direct link to Extrinsics Info Charts (EI)") These charts provide metrics on the spatial relationships between your calibrated components. ### EI-1: Component Extrinsics Errors[​](#ei-1-component-extrinsics-errors "Direct link to EI-1: Component Extrinsics Errors") [MetriCal Output - ei\_1](/_pre_gen_html/ei_1.html) This is a complete summary of all component extrinsics errors (as RMSE) between each pair of components, as described by the [Composed Relative Extrinsics metrics](/metrical/results/residual_metrics/composed_relative_extrinsics.md). This table is probably one of the most useful when evaluating the quality of a plex's extrinsics calibration. Note that the extrinsics errors are weighted, which means outliers are taken into account. Lower values indicate more precise extrinsic calibration between components. Rotations are printed as Euler angles, using Extrinsic XYZ convention. 
* **X, Y, Z (m)**: Translation errors in meters * **Roll, Pitch, Yaw (°)**: Rotation errors in degrees ### EI-2: IMU Preintegration Errors[​](#ei-2-imu-preintegration-errors "Direct link to EI-2: IMU Preintegration Errors") [MetriCal Output - ei\_2](/_pre_gen_html/ei_2.html) This is a complete summary of all [IMU Preintegration errors](/metrical/results/residual_metrics/imu_preintegration_error.md) from the system. Notice that IMU preintegration error is with respect to an object space, not a component. The inertial frame of a system is tied to a landmark in space, so it makes sense that an IMU's error would be tied to a target. Rotations are printed as Euler angles, using Extrinsic XYZ convention. ### EI-3: Observed Camera Range of Motion[​](#ei-3-observed-camera-range-of-motion "Direct link to EI-3: Observed Camera Range of Motion") [MetriCal Output - ei\_3](/_pre_gen_html/ei_3.html) This critical table shows how much motion was observed for each camera during data collection: * **Z (m)**: Range of depth (distance variation) observed * **Horizontal angle (°)**: Range of horizontal motion observed * **Vertical angle (°)**: Range of vertical motion observed For accurate calibration, you typically want to see: * Z range > 1m * Horizontal angle > 60° * Vertical angle > 60° Insufficient range of motion is a common reason for calibration issues, often leading to projective compensation errors. ## Calibrated Plex Charts (CP)[​](#calibrated-plex-charts-cp "Direct link to Calibrated Plex Charts (CP)") These tables display the actual calibration results for your sensor system. ### CP-1: Camera Metrics[​](#cp-1-camera-metrics "Direct link to CP-1: Camera Metrics") [MetriCal Output - cp\_1](/_pre_gen_html/cp_1.html) This table shows the calibrated intrinsic parameters for each camera. Different models will have different interpretations; see the [Camera Models](/metrical/calibration_models/cameras.md) page for more. * **Specs**: Basic camera specifications (width, height, pixel pitch) * **Projection Model**: Calibrated projection parameters (focal length, principal point) * **Distortion Model**: Calibrated distortion parameters (varies by model type) The standard deviations (±) indicate the uncertainty of each parameter. ### CP-2: Optimized IMU Metrics[​](#cp-2-optimized-imu-metrics "Direct link to CP-2: Optimized IMU Metrics") [MetriCal Output - cp\_2](/_pre_gen_html/cp_2.html) This table presents all IMU metrics derived for every IMU component in a calibration run. The most interesting column for most users is the Intrinsics: scale, shear, rotation, and g sensitivity. ### CP-3: Calibrated Extrinsics[​](#cp-3-calibrated-extrinsics "Direct link to CP-3: Calibrated Extrinsics") [MetriCal Output - cp\_3](/_pre_gen_html/cp_3.html) This table represents the [Minimum Spanning Tree](/metrical/commands/shape/shape_mst.md) of all spatial constraints in the Plex. Note that this table doesn't print *all* spatial constraints in the plex; it just takes the "best" constraints possible that would still preserve the structure. Rotations are printed as Euler angles, using Extrinsic XYZ convention. 
* **Translation (m)**: The X, Y, Z position of each component relative to the origin * **Diff from input (mm)**: How much the calibration changed from the initial values * **Rotation (°)**: The Roll, Pitch, Yaw rotation of each component in degrees * **Diff from input (°)**: How much the rotation changed from the initial values The table also indicates which "subplex" each component belongs to (components that share spatial relationships). ## Summary Statistics Charts (SS)[​](#summary-statistics-charts-ss "Direct link to Summary Statistics Charts (SS)") These charts provide overall metrics on calibration quality. ### SS-1: Optimization Summary Statistics[​](#ss-1-optimization-summary-statistics "Direct link to SS-1: Optimization Summary Statistics") [MetriCal Output - ss\_1](/_pre_gen_html/ss_1.html) This table provides high-level metrics about the optimization process: * **Optimized Object RMSE**: The overall reprojection error across all cameras * **Posterior Variance**: A statistical measure of the calibration uncertainty Lower values indicate a more accurate calibration. ### SS-2: Camera Summary Statistics[​](#ss-2-camera-summary-statistics "Direct link to SS-2: Camera Summary Statistics") [MetriCal Output - ss\_2](/_pre_gen_html/ss_2.html) This table summarizes reprojection errors for each camera. Typically, values under 0.5px indicate good calibration, with values under 0.2px being excellent. However, this can vary based on your camera image resolution or camera type. Comparing Camera RMSE If two cameras have pixels of different sizes, their pixel-space RMSEs are not directly comparable; convert them to a metric size first. This is what `pixel_pitch` in the [Plex API](/metrical/core_concepts/components.md#camera) is for: multiplying a camera's RMSE (in pixels) by its pixel pitch gives an error on the sensor in metric units, which can be compared fairly across cameras with different pixel sizes. ### SS-3: LiDAR Summary Statistics[​](#ss-3-lidar-summary-statistics "Direct link to SS-3: LiDAR Summary Statistics") [MetriCal Output - ss\_3](/_pre_gen_html/ss_3.html) The LiDAR Summary Statistics show the Root Mean Square Error (RMSE) of four different types of residual metrics: * [Circle Misalignment](/metrical/results/residual_metrics/circle_misalignment.md), if a camera-lidar pair is co-visible with a lidar circle target. * [Interior Points to Plane Error](/metrical/results/residual_metrics/interior_points_to_plane_error.md), if a camera-lidar pair is co-visible with a lidar circle target. * [Paired 3D Point Error](/metrical/results/residual_metrics/paired_3d_point_error.md), if a lidar-lidar pair is co-visible with a lidar circle target. * [Paired Plane Normal Error](/metrical/results/residual_metrics/paired_plane_normal_error.md), if co-visible LiDAR are present For a component that has been appropriately modeled (i.e. there are no un-modeled systematic error sources present), this represents the mean quantity of error from observations taken by a single component. Comparing LiDAR RMSE Two LiDAR calibrated simultaneously will have the same RMSE relative to one another. This makes intuitive sense: LiDAR A will have a certain relative error to LiDAR B, but LiDAR B will have that same relative error when compared to LiDAR A. Make sure to take this into account when comparing LiDAR RMSE more generally.
## Data Diagnostics[​](#data-diagnostics "Direct link to Data Diagnostics") [MetriCal Output - data\_diagnostics](/_pre_gen_html/data_diagnostics.html) MetriCal performs comprehensive analysis of your calibration data and provides diagnostics to help identify potential issues. These diagnostics are categorized by severity level: ### █ High-Risk Diagnostics[​](#-high-risk-diagnostics "Direct link to -high-risk-diagnostics") Critical issues that likely need to be addressed for reliable calibration: * **Poor Camera Range of Motion**: Camera movement range insufficient for calibration * **No Component to Register Against**: Missing required component for calibration * **Component Missing Compatible Component Type**: Compatible component exists but has no detections * **Component Missing Compatible Component Type Detections**: All detections filtered from compatible component * **Motion Filter filtered all component observations**: Motion filter removed all observations * **Not Enough Mutual Observations**: Camera pair lacks sufficient mutual observations * **Component Shares No Sync Groups**: Component has no timestamp overlap with others * **Object Has Large Variances**: Object has excessive variance (> 1e-6) ### █ Medium-Risk Diagnostics[​](#-medium-risk-diagnostics "Direct link to -medium-risk-diagnostics") Issues that should be addressed but may not prevent successful calibration: * **Poor Camera Feature Coverage**: Poor feature coverage in camera FOV (< 75%) * **Two or More Spatial Subplexes**: Multiple unrelated spatial groups detected * **More Mutual Observations?**: Camera pair could benefit from more mutual observations ### █ Low-Risk Diagnostics[​](#-low-risk-diagnostics "Direct link to -low-risk-diagnostics") Advice that might help improve calibration quality: * **Component Has Many Low Quality Detections**: High proportion of detections discarded due to quality issues ## Output Summary[​](#output-summary "Direct link to Output Summary") [MetriCal Output - output](/_pre_gen_html/output.html) This is a summary of the output files generated by MetriCal, or instructions on how to access them. ## Conclusion[​](#conclusion "Direct link to Conclusion") Interpreting MetriCal output charts is key to understanding your calibration quality and identifying areas for improvement. By systematically analyzing each section, you can iteratively improve your calibration process to achieve more accurate results. Remember that calibration is both an art and a science—experimental design matters greatly, and the metrics provided by MetriCal help you quantify the quality of your calibration data and results. --- # Circle Misalignment Created by: All Camera-LiDAR pairs ## Overview[​](#overview "Direct link to Overview") Circle misalignment is a metric unique to MetriCal. It's an artifact of the way MetriCal bridges the two distinct modalities of camera (primarily 2D features and projection) and LiDAR (3D point clouds in Euclidean space). The design of the circular target is the key. The target starts as a regular camera target like a ChArUco board. That board is then sized into a circle of a known diameter and outlined with retroreflective material, like tape. This design allows both cameras and LiDAR to pick up the target via its sensing modality. ![Circle Board](/assets/images/circle_plane_construction-bc071c3a36cdcef10afa5503722bffaa.png) There is one circle misalignment metric group for every circular target in use. 
## Definition[​](#definition "Direct link to Definition") Circle misalignment metrics contain the following fields: | Field | Type | Description | | --- | --- | --- | | `metadata` | A common metadata object | The metadata associated with the point cloud this circle target was measured in. | | `object_space_id` | UUID | The UUID of the object space that was being observed. | | `measured_circle_center` | An array of 3 float values | The X/Y/Z coordinate of the center of the circle target, in the LiDAR coordinate frame. | | `world_extrinsics_component_ids` | An array of UUIDs | The camera UUIDs for each world extrinsic in `world_extrinsics`. | | `world_extrinsics` | An array of world extrinsics objects | The world poses (camera from object space) that correspond to each `circle_center_misalignment`. | | `circle_center_misalignment` | An array of circle center coordinates | The error between the circle center estimated by each camera and the circle center measured by the LiDAR. | | `circle_center_rmse` | Float | The circle center misalignment RMSE over all world extrinsics. | ## Analysis[​](#analysis "Direct link to Analysis") ### The "Center"[​](#the-center "Direct link to The \"Center\"") The circle misalignment metrics are largely about bridging the gap between the two modalities. * The center of the circle in *camera space* is the center of the ChArUco board, given its metric dimensions * The center of the circle in *LiDAR space* is the centroid of the planar 2D circle fit from the points detected on the ring of retroreflective tape. ![Circle Inliers](/assets/images/circle_plane_inliers-ceda93116879605e71c4c7c6bd1c0d5c.png) The `measured_circle_center` above is this LiDAR center; the `circle_center_misalignment` is the error between that LiDAR circle center and the circle center estimated from each camera. This might seem straightforward, but there's a bit more to it than that. Since there are no commonly observable features between cameras and LiDAR, MetriCal has to use a bit of math to make calibration work. Think of the object circle center as our origin; we'll call this $C^O$. The LiDAR circle center is that same point, but in the LiDAR coordinate frame: $C^L = \Gamma_O^L \cdot C^O$ ...and every camera has its own estimate of the circle center w.r.t. the camera board, $C^C$: $C^C = \Gamma_O^C \cdot C^O$ We can relate these centers to one another by using the extrinsics between LiDAR and Camera, $\Gamma_C^L$: $\hat{C}^L = \Gamma_C^L \cdot C^C = \Gamma_C^L \cdot \Gamma_O^C \cdot C^O$ With both $C^L$ and $\hat{C}^L$ in the LiDAR coordinate frame, we can calculate the error between the two and get our `circle_center_misalignment`: $ccm = C^L - \hat{C}^L$ $\Gamma_O^C$ is what is referred to when we say `world_extrinsics`, and the `world_extrinsics_component_ids` designate what camera that extrinsic relates to. MetriCal calculates these for every pair of synced camera-LiDAR observations. --- # Composed Relative Extrinsics Created by: All Components and Objects ## Overview[​](#overview "Direct link to Overview") These metrics refer to the error between relative extrinsics measurements (that are composed between components and objects) and the current estimated extrinsic. What does this mean? * Components A and B have a *relative extrinsic* formed by object O, represented by $\Gamma_O^A \cdot \Gamma_B^O$. * The *current estimated extrinsic* between A and B is just the transform between the two components, $\Gamma_B^A$.
---

# Composed Relative Extrinsics

Created by: All Components and Objects

## Overview

These metrics refer to the error between relative extrinsics measurements (that are composed between components and objects) and the current estimated extrinsic. What does this mean?

* Components A and B have a *relative extrinsic* formed by object O, represented by $\Gamma_O^A \cdot \Gamma_B^O$.
* The *current estimated extrinsic* between A and B is just the transform between the two components, $\Gamma_B^A$.

Ideally, these two should be the same:

$$\Gamma_B^A \approx \Gamma_O^A \cdot \Gamma_B^O$$

...but nothing's perfect in optimization. The error between these two is what we capture in the Composed Relative Extrinsics.

Since MetriCal optimizes for both components and objects, both components and objects have relative constraints. When the `kind` is a *component* relative extrinsic, each `common_uuid` will refer to object spaces. Correspondingly, when the `kind` is an *object* relative extrinsic, `common_uuid` will refer to components.

![Composed Relative Extrinsics](/assets/images/composed_relative_extrinsics-87acd49525ae70d8b905a8c13747bdde.png)

## Description

Composed relative extrinsics metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `kind` | String | The kind of relative extrinsic (either "Component" or "Object"). |
| `from` | UUID | The UUID of the "from" coordinate frame. |
| `to` | UUID | The UUID of the "to" coordinate frame. |
| `extrinsics_differences` | An array of extrinsics objects | The differences from a unit extrinsic when subtracting the composed world extrinsics from the estimated component or object extrinsic. |
| `common_uuids` | An array of UUIDs | The "common" UUIDs that link a component relative extrinsic or object relative extrinsic. |
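The "difference from a unit extrinsic" idea is easiest to see with a small numpy sketch. This is illustrative only (made-up 4×4 matrices, not MetriCal's internal representation): compose the relative extrinsic through the object, then compare it against the current estimate.

```
import numpy as np

# Hypothetical 4x4 extrinsics; in a real run these come from the calibration output.
gamma_a_from_object = np.eye(4)                      # A from object O
gamma_object_from_b = np.eye(4)                      # O from B
gamma_object_from_b[:3, 3] = [0.10, 0.00, 0.00]
gamma_a_from_b_estimated = np.eye(4)                 # current estimate of A from B
gamma_a_from_b_estimated[:3, 3] = [0.11, 0.00, 0.00]

# Compose the relative extrinsic through the object space...
gamma_a_from_b_composed = gamma_a_from_object @ gamma_object_from_b

# ...then express the residual as a difference from the identity (unit) extrinsic.
difference = np.linalg.inv(gamma_a_from_b_estimated) @ gamma_a_from_b_composed
print(difference[:3, 3])  # ~[-0.01, 0, 0] for these made-up numbers
```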
---

# Image Reprojection

Created by: Cameras

## Overview

Reprojection error is the error in the position of a feature in an image, as compared to the position of its corresponding feature in object space. More simply put, it tells you how well the camera model generalizes in the real world. Reprojection is often seen as the only error metric to measure *precision* within an adjustment. Reprojection errors can tell us a lot about the calibration process and provide insight into what image effects were (or were not) properly calibrated for.

## Definition

Image reprojection metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `metadata` | A common image metadata object | The metadata associated with the image that this reprojection data was constructed from. |
| `object_space_id` | UUID | The UUID of the object space that was observed by the image this reprojection data corresponds to. |
| `ids` | An array of integers | The identifiers of the object space features detected in this image. |
| `us` | An array of floats | The u-coordinates for each object space feature detected in this image. |
| `vs` | An array of floats | The v-coordinates for each object space feature detected in this image. |
| `rs` | An array of floats | The radial polar coordinates for each object space feature detected in this image. |
| `ts` | An array of floats | The tangential polar coordinates for each object space feature detected in this image. |
| `dus` | An array of floats | The error in u-coordinates for each object space feature detected in this image. |
| `dvs` | An array of floats | The error in v-coordinates for each object space feature detected in this image. |
| `drs` | An array of floats | The error in radial polar coordinates for each object space feature detected in this image. |
| `dts` | An array of floats | The error in tangential polar coordinates for each object space feature detected in this image. |
| `world_extrinsic` | An extrinsics object | The pose of the camera (camera from object space) for this image. |
| `object_xs` | An array of floats | The object space (3D) X-coordinates for each object space feature detected in this image. |
| `object_ys` | An array of floats | The object space (3D) Y-coordinates for each object space feature detected in this image. |
| `object_zs` | An array of floats | The object space (3D) Z-coordinates for each object space feature detected in this image. |
| `rmse` | Float | Root mean square error of the image residuals, in pixels. |

## Camera Coordinate Frames

MetriCal uses two different conventions for image space points. Both can and should be used when analyzing camera calibration statistics.

### CV (Computer Vision) Coordinate Frame

![xy image space](/assets/images/xy_image_space-f34ff71462d66755a8394839a4346e90.png)

This is the standard coordinate system used in computer vision. The origin of the coordinate system is $(x_0, y_0)$, and is located in the upper left corner of the image. When working in this coordinate frame, we use lower-case x and y to denote that these coordinates are in the image, whereas upper-case X, Y, and Z are used to denote coordinates of a point in object space.

This coordinate frame is useful when examining feature space, or building histograms across an image.

### UV Coordinate Frame

![uv image space](/assets/images/uv_image_space-dceffca7fc9b9aa2c993c36b2f3bf244.png)

This coordinate frame maintains the same axes conventions, but instead places the origin at the principal point of the image, labeled as $(u_0, v_0)$. Notice that the coordinate dimensions are referred to as lower-case u and v, to denote that the axes are in image space and relative to the principal point.

Most charts that deal with reprojection error convey more information when plotted in UV coordinates than CV coordinates. For instance, radial distortion increases proportional to radial distance from the principal point — not the top-left corner of the image.

### Cartesian vs. Polar

In addition to understanding the different origins of the two coordinate frames, polar coordinates are sometimes used in order to visualize reprojections as a function of radial or tangential differences. When using polar coordinates, points are likewise centered about the principal point. Thus, we go from our previous $(u, v)$ frame to $(r, t)$.

| Cartesian | Polar |
| --- | --- |
| ![cartesian point](/assets/images/cartesian_point-7ab5e76641187c7e767d84a8b7eebeff.png) | ![polar point](/assets/images/polar_point-a07357efa392f10b08da1f1141a06494.png) |

## Analysis

Below are some useful reprojection metrics and trends that can be derived from the numbers found in `results.json`.
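As a starting point, here is a minimal Python sketch of working with a single image reprojection record. The numbers are made up, and in practice you would pull the record out of `results.json`; the exact averaging convention behind the reported `rmse` field may differ slightly from the one shown here.

```
import numpy as np

# One image reprojection record, with made-up numbers standing in for the fields
# described above (in practice, extract these from results.json).
record = {
    "rs": [120.0, 250.0, 410.0],
    "drs": [0.05, -0.10, 0.22],
    "dus": [0.04, -0.08, 0.15],
    "dvs": [0.03, 0.06, -0.16],
}

du, dv = np.asarray(record["dus"]), np.asarray(record["dvs"])
# A per-image RMSE over the u/v residuals, in pixels.
rmse = np.sqrt(np.mean(du**2 + dv**2))

# The polar residuals are already provided, so a δr vs. r scatter is just rs vs. drs.
r, dr = np.asarray(record["rs"]), np.asarray(record["drs"])
print(rmse, list(zip(r, dr)))
```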
### Feature Coverage Analysis

**Data Based In**: Either CV or UV Coordinate Frame

All image space observations made from a single camera component over the entire calibration process are plotted. This gives us a sense of data coverage over the domain of the image. For a camera calibration process, this chart should ideally have an even distribution of points within the image without any large empty spaces. This even spread prevents a camera model from *overfitting* on any one area.

![feature coverage analysis](/assets/images/image_coverage_analysis-36b667f749a0de6d9fb732a15f620d6b.png)

In the above example, there are some empty spaces near the periphery of the image. This can happen due to image vignetting (during the capture process), or merely because the target was never moved to cover that part of the scene during data capture.

### Radial Error - δr vs. r

**Data Based In**: UV Coordinate Frame

The δr vs. r graph plots radial reprojection error as a function of radial distance from the principal point. This graph is an excellent way to characterize distortion error, particularly radial distortions.

| Expected | Poor Result |
| --- | --- |
| ![good radial distortion modeling](/assets/images/good_radial_distortion-8db02b56616ff4f5d8210fe748d9f047.png) | ![bad radial distortion modeling](/assets/images/bad_radial_distortion-e661240dc57b7a6bd32d90a88f12b30b.png) |

Consider the graph above: this distribution represents a fully calibrated system that has modeled distortion using the Brown-Conrady model. The error is fairly evenly distributed and low, even as one moves away from the principal point of the image.

However, were MetriCal configured *not* to calibrate for a distortion model (e.g. a plex was generated with `metrical init` such that it used the `no_distortion` model), the output would look very different (see the right figure above). Radial error now fluctuates in a sinusoidal pattern, getting worse as we move away from the principal point. Clearly, this camera needs a distortion model of some kind in future calibrations.

### Tangential Error - δt vs. t

**Data Based In**: UV Coordinate Frame

Like the δr vs. r graph, δt vs. t plots the tangential reprojection error as a function of the tangential (angular) component of the data about the principal point. This can be a useful plot to determine if any unmodeled tangential (de-centering) distortion exists.

The chart below shows an adjustment with tangential distortion correctly calibrated and accounted for. The "Poor Result" shows the same adjustment without tangential distortion modeling applied.
| Expected | Poor Result | | ---------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- | | ![good tangential distortion modeling](/assets/images/good_tangential_distortion-1990528a484d5fd614e456caafcf1602.png) | ![bad tangential distortion modeling](/assets/images/bad_tangential_distortion-808aa3327327752810c754cb96e65d83.png) | ### Error in u and v[​](#error-in-u-and-v "Direct link to Error in u and v") **Data Based In**: UV Coordinate Frame These plot the error in our Cartesian axes (δu or δv) as a function of the distance along that axis (u or v). Both of these graphs should have their y-axes centered around zero, and should mostly look uniform in nature. The errors at the extreme edges may be larger or more sparse; however, the errors should not have any noticeable trend. | Expected δu vs. u | Expected δv vs. v | | --------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- | | ![good du vs u modeling](/assets/images/good_u_residual-23d800d7ce8783e293e2bb0c1d8610fb.png) | ![good dv vs v modeling](/assets/images/good_v_residual-c89b12b6562f990d2017e45531eb950d.png) | ## Unmodeled Intrinsics Indicators[​](#unmodeled-intrinsics-indicators "Direct link to Unmodeled Intrinsics Indicators") There are certain trends and patterns to look out for in the plots above. Many of these can reveal unmodeled intrinsics within a system, like a distortion that wasn't taken into account in this calibration process. A few of these patterns are outlined below. ### Unmodeled Tangential Distortion[​](#unmodeled-tangential-distortion "Direct link to Unmodeled Tangential Distortion") It was obvious that something was amiss when looking at the Poor Result plot for the δt vs. t graph, above. However, depending on the magnitude of error, we may suspect that any effects we see in such a graph are noise. If we then look at the δu vs. u and δv vs. v graphs, we might see the following trends as well: | Unmodeled Tangential - δu vs. u | Unmodeled Tangential - δv vs. v | | ------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | | ![bad du vs u with tangential distortion](/assets/images/bad_u_residual-852255614513c4bcaf3dad4c9bd91a1e.png) | ![bad dv vs v with tangential distortion](/assets/images/bad_v_residual-19c40620826219907a0a04059043bce8.png) | ## Comparative Analysis[​](#comparative-analysis "Direct link to Comparative Analysis") Beyond the above analyses, the charts provided are incredibly useful when comparing calibrations for the same component over time. Comparing these charts temporally is useful when designing a calibration process, and likewise can be useful in deciding between different models (e.g. Brown-Conrady vs. Kannala-Brandt distortion, etc.). Comparing these charts across components can be helpful if the components are similar (e.g. similar cameras from the same manufacturer). There are some caveats; for example, one cannot compare these charts across two cameras if they have different pixel pitches. 
Pixel errors from a camera that has 10µm pixels cannot be directly compared to pixel errors from a camera that has 5µm pixels, as the former is 2 times larger than the latter. One might understandably see that the former component has reprojection errors 2 times smaller than the latter, but this would be a false distinction — that difference is due to the difference in size between the pixels of both cameras and not due to some quality of the calibration.

## Pose Data

The `world_extrinsic` from the image reprojection data represents the pose (camera from object space) of the camera when a given image was taken. Each image can be uniquely identified by its component UUID and timestamp (present in the metadata), so these poses can be plotted or used to determine the position of the camera when a given image was taken. This is one way to perform a hand-eye calibration with MetriCal: extract the world pose of each image given a timestamp or sequence number.

## Object XYZs

In addition to image reprojections, the final "unprojected" 3D coordinates of the object space features are also included in image reprojection metrics. This can be used in conjunction with other modality information to determine how an individual image contributed to errors in the final optimized object space. While MetriCal does not have any means to single out and filter an individual image, this can help during the development of a calibration process to ascertain if certain poses or geometries negatively contribute towards the calibration.
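As a concrete companion to the Pose Data section above, here is a minimal Python sketch of collecting camera positions keyed by component and timestamp. The record layout and key names are illustrative only; the real metadata structure in `results.json` may differ.

```
import numpy as np

# Hypothetical: one camera pose per image, keyed by component UUID and timestamp.
def camera_positions(records):
    positions = {}
    for record in records:
        key = (record["component_id"], record["timestamp"])
        camera_from_object = np.asarray(record["world_extrinsic"])  # assumed 4x4 here
        # Invert camera-from-object-space to get the camera's position in object space.
        object_from_camera = np.linalg.inv(camera_from_object)
        positions[key] = object_from_camera[:3, 3]
    return positions

# Example with a single made-up record:
pose = np.eye(4)
pose[:3, 3] = [0.0, 0.0, 2.0]
print(camera_positions([{"component_id": "cam-a", "timestamp": 0.1, "world_extrinsic": pose}]))
```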
---

# IMU PreIntegration Error

Created by: IMU

## Overview

IMU preintegration error metrics contain all the relevant information to compute a preintegration cost based on a series of navigation states and IMU measurements.

In MetriCal, an IMU's *navigation states* represent key points in time where the IMU's position and velocity can be temporally related to another component, e.g. a camera. MetriCal uses these moments to define a *preintegration window* for the IMU, which in turn will produce a *local increment*, also known as a preintegrated measurement. See the [Analysis](#analysis) section for a more detailed explanation of preintegration.

![IMU Preintegration](/assets/images/imu_preintegration-770a17e7c3075dd60382ef0d76545c3f.png)

## Definition

IMU preintegration error metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `nav_component_id` | UUID | The UUID of the IMU component this metric was generated from. |
| `initial_gyro_bias` | An array of 3 float values | The XYZ components denoting the initial gyro bias (units of radians / second). |
| `initial_accelerometer_bias` | An array of 3 float values | The XYZ components denoting the initial accelerometer bias (units of meters / second²). |
| `start_navigation_states` | An array of extrinsics-velocity objects | The inferred starting navigation states. |
| `end_navigation_states` | An array of extrinsics-velocity objects | The inferred ending navigation states. |
| `local_increments` | An array of extrinsics-velocity objects | The local increments of the preintegration before bias correction. |
| `misalignments` | An array of extrinsics-velocity objects | The residual errors of the preintegration: the misalignment between the preintegration and the inferred preintegration based on the navigation states. |
| `preintegration_times` | An array of floats | The preintegration horizon: the change in time between the start and end navigation states. |
| `gyro_biases` | An array of arrays of 3 float values | An array of the inferred XYZ gyro biases of the preintegration. |
| `accelerometer_biases` | An array of arrays of 3 float values | An array of the inferred XYZ accelerometer biases of the preintegration. |

## Analysis

### Navigation States and Preintegration

In order to understand preintegration, it's helpful to understand what would happen without it. In a naive IMU integration, every new IMU measurement is integrated back into the same starting frame to produce a new navigation state. This is called *IMU mechanization*.

![Global increment](/assets/images/global_increment-3a2bf9f5441e828f62c36636b69c6c0c.png)

However, this approach has some serious problems. For one, any measurements out-of-sequence will make the computation more difficult, since every measurement state relies on the one before it. Moreover, IMU measurements will only help us *propagate* our navigation state estimate, not correct it. In such mechanized motion models, every subsequent navigation state becomes more uncertain if we don't have any auxiliary information to correct the state.

Instead, MetriCal uses *preintegration* to solve these problems. Preintegration is the process of reorganizing the state integration from a global frame into *local increments* between navigation states.

![Local increment](/assets/images/local_increment-e8caa304d196e83d7aa8a45f34c13ae9.png)

This small reformulation allows us to address most of the problems created by global state propagation, and is more computationally efficient to boot. Because of this shift in thinking, local increments don't even need to account for the starting navigation state's velocity, nor correct specific-force measurements to accelerations. The local increment is simply the integral of intrinsically corrected IMU measurements between the start and end navigation states. MetriCal optimizes the calibration and the start and end navigation states to align with this preintegrated local increment.

**IMU Preintegration Basics**: If you want to learn more about IMU preintegration, we wrote a whole series on IMUs on the Tangram Vision Blog! There's a lot to know; we don't get to preintegration until [Part 5](https://www.tangramvision.com/blog/imu-preintegration-basics-part-5-of-5#imu-preintegration).

---

# Interior Points to Plane Error

Created by: All Camera-LiDAR pairs (optional)

## Overview

Interior Points to Plane error reflects the error in fit between the LiDAR points observed on the surface of a circular target and the actual target.

Unlike other metrics, this metric is not generated in every run. It is only produced if the `detect_interior_points` flag is set to `true` in the `circle` object space detector.
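For reference, the flag lives on the detector in your object space JSON. The sketch below is partial and illustrative: only the `detect_interior_points` flag comes from this page, the surrounding keys mirror the object space examples elsewhere in these docs, and the remaining circle detector parameters are omitted.

```
{
  "object_spaces": {
    "your-object-space-uuid": {
      "detector": {
        "circle": {
          "detect_interior_points": true
          // ...remaining circle detector parameters omitted
        }
      }
    }
  }
}
```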
![Interior Points to Plane](/assets/images/interior_points_to_plane-f88def986c2b3462c762636235324509.png)

## Definition

Interior Points to Plane Error metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `metadata` | A common metadata object | The metadata associated with the point cloud this circle target was measured in. |
| `object_space_id` | UUID | The UUID of the object space that was being observed. |
| `plane_inliers_x` | An array of floats | All X-coordinates of points contained within the circle target. |
| `plane_inliers_y` | An array of floats | All Y-coordinates of points contained within the circle target. |
| `plane_inliers_z` | An array of floats | All Z-coordinates of points contained within the circle target. |
| `world_extrinsics_component_ids` | An array of UUIDs | The camera UUIDs for each world extrinsic in `world_extrinsics`. |
| `world_extrinsics` | An array of world extrinsics objects | The world poses (camera from object space) that correspond to each entry in `plane_inliers_distances`. |
| `plane_inliers_distances` | An array of arrays of floats | The point-to-plane distances between the plane inlier points and the plane observed at a given world extrinsic. |
| `plane_distance_rmse_per_we` | An array of floats | The RMSE of the plane inlier distances at each world extrinsic. |
| `plane_distance_rmse` | Float | The plane inlier distance RMSE over all world extrinsics. |

## Analysis

This metric can be considered a companion metric to [Circle Misalignment](/metrical/results/residual_metrics/circle_misalignment.md). While circle misalignment measures the error between the observed circle center in LiDAR space and the circle center estimated from each camera, interior points to plane error uses those same derived extrinsics to estimate the error in fit between the LiDAR points observed on the surface of the circle target and the actual target.

### Why Optional?

This metric is not generated in every run; it's opt-in. This is because poor quality LiDAR can often give erroneous and erratic distance readings, even on flat surfaces. Using these observations in a calibration would just make things worse, not better. By making this metric optional, we give the user the option to disregard these readings and just work with the circle center for calibration.

---

# Object Inertial Extrinsics Error

Created by: IMU

## Overview

These metrics refer to the error between a sequence of measured and optimized extrinsics involving the IMU. Put in mathematical terms:

$$\Gamma_O^E \approx \Gamma_{IMU}^E \cdot \Gamma_C^{IMU} \cdot \Gamma_O^C$$

* $E$ is the inertial frame, a gravity-aligned frame with its origin coincident with the first IMU navigation state
* $O$ is the object space
* $IMU$ is the current IMU navigation state
* $C$ is any component that can observe the object space directly, e.g. a camera

The transform $\Gamma_O^E$ is the *object inertial extrinsic*. This calculation is performed for every navigation state of the IMU, since a navigation state directly corresponds to a synced component's observations of the object space in question. Note that only one of these metrics is created per object space.
This metric is closely related to [Composed Relative Extrinsics](/metrical/results/residual_metrics/composed_relative_extrinsics.md); the strategy is basically the same. However, since an IMU cannot directly observe objects, we use the navigation states to infer the relative extrinsics.

![Object Inertial Extrinsics](/assets/images/oie_error-48ac5c97679eac82689c6338b0fe8943.png)

## Definition

Object inertial extrinsics error metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `nav_component_id` | UUID | The UUID of the IMU component this metric was generated from. |
| `navigation_states` | An array of extrinsics-velocity objects | The inferred navigation states of the object inertial extrinsic cost. |
| `world_extrinsics` | An array of world extrinsics objects | The inferred world pose (component from world) of the object inertial extrinsic, $\Gamma_O^C$. |
| `component_relative_extrinsics` | An array of extrinsics objects | The inferred component relative extrinsics, $\Gamma_C^{IMU}$. |
| `object_inertial_extrinsics` | An array of extrinsics objects | The inferred object inertial extrinsics, $\Gamma_O^{IMU}$. |
| `misalignments` | An array of extrinsics objects | The residual error of each object inertial extrinsic. |

---

# Paired 3D Point Error

Created by: LiDAR-LiDAR pairs

## Overview

LiDAR points can't be directly compared to each other, but their alignment can be inferred. When MetriCal calibrates two LiDARs to one another, it uses the detected circle centers from each LiDAR frame of reference to optimize. The difference in these circle centers is the Paired 3D Point Error.

Note that the only LiDAR points used in this calculation are those detected on the retroreflective edge of the circle target; the interior points of the board are not used.

There is a Paired 3D Point Error metric group for every pair of synced observations between LiDARs.

![Paired 3D Point Error](/assets/images/paired_3d_point-c132448357adbc0922fccd419fea7588.png)

## Description

Paired 3D point error metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `from` | UUID | The "from" component that these point misalignments were computed in reference to. |
| `to` | UUID | The "to" component that these point misalignments were computed in reference to. |
| `from_points` | An array of arrays of 3 floats | A collection of the XYZ coordinates of points in the "from" coordinate frame being matched against the `to_points`. |
| `to_points` | An array of arrays of 3 floats | A collection of the XYZ coordinates of points in the "to" coordinate frame being matched against the `from_points`. |
| `misalignments` | An array of arrays of 3 floats | The transformed distance misalignment (split up according to the Cartesian / XYZ axes) between the to and from points. |
| `rmse` | Float | The root-mean-square-error of all the misalignments. |
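The per-axis misalignments are straightforward to reproduce by hand. Here is a minimal numpy sketch with made-up circle centers and a made-up "to"-from-"from" extrinsic; the exact averaging convention behind the reported `rmse` may differ.

```
import numpy as np

# Hypothetical inputs: matched circle centers seen by each LiDAR, plus the current
# estimate of the "to"-from-"from" extrinsic as a 4x4 homogeneous matrix.
from_points = np.array([[1.0, 0.2, 3.0], [0.8, -0.4, 2.5]])
to_points = np.array([[0.9, 0.2, 3.0], [0.7, -0.4, 2.5]])
to_from_from = np.eye(4)
to_from_from[:3, 3] = [-0.1, 0.0, 0.0]

# Transform the "from" points into the "to" frame and take per-axis differences.
from_in_to = (from_points @ to_from_from[:3, :3].T) + to_from_from[:3, 3]
misalignments = to_points - from_in_to

# RMSE over all axes of all misalignments (zero for this perfectly aligned example).
rmse = np.sqrt(np.mean(misalignments ** 2))
print(misalignments, rmse)
```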
---

# Paired Plane Normal Error

Created by: LiDAR-LiDAR pairs

## Overview

LiDAR points can't be directly compared to each other, but their alignment can be inferred. When MetriCal calibrates two LiDARs to one another, it uses the detected plane normals of the circle target from each LiDAR frame of reference to optimize. The difference in these normals is the Paired Plane Normal Error.

Note that the only LiDAR points used in this calculation are those detected on the retroreflective edge of the circle target; the interior points of the board are not used.

There is a Paired Plane Normal Error metric group for every pair of synced observations between LiDARs.

![Paired Plane Error](/assets/images/paired_plane-66816dbb6e898ef2caff8adb984f2769.png)

## Description

Paired plane normal error metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `from` | UUID | The "from" component that these normal misalignments were computed in reference to. |
| `to` | UUID | The "to" component that these normal misalignments were computed in reference to. |
| `from_normals` | An array of arrays of 3 floats | A collection of the XYZ orientations of plane normals in the "from" coordinate frame being matched against the `to_normals`. |
| `to_normals` | An array of arrays of 3 floats | A collection of the XYZ orientations of plane normals in the "to" coordinate frame being matched against the `from_normals`. |
| `misalignments` | An array of arrays of 3 floats | The transformed chordal distance misalignment (split up according to the Cartesian / XYZ axes) between the to and from plane normals. |
| `rmse` | Float | The root-mean-square-error of all the misalignments. |

---

# Calibrate RealSense Sensors

**Get the Code**: The full code for this tutorial can be found in the [RealSense Cal Flasher repository](https://gitlab.com/tangram-vision/oss/realsense-cal-flasher) on GitLab.

## Objectives

* Record proper data from all RealSense sensors at the same time
* Calibrate several RealSense sensors at once using MetriCal
* Flash calibrations to each sensor

***

One of the most popular applications of MetriCal is calibrating multiple Intel RealSense sensors at once. RealSense is one of the most popular 3D sensing technologies, and it's common to see multiple RealSense devices deployed on a single system (even if they aren't all being used!). As with any sensor system, the calibration on a RealSense can drift over time, whether due to wear and tear, installation changes, or environmental factors. MetriCal can help you keep your RealSense sensors calibrated and accurate with minimal effort.

This tutorial will focus on the D4xx series.

## Recording Data

MetriCal is meant for convenience, so we recommend recording all sensors at the same time, in the same dataset. There is no need for multiple recordings!

Every RealSense device streams rectified data unless set to a specific frame format and resolution. In other words, most data will have already had the current calibration applied to the frame! We need to avoid this in our own setup.
Here are the settings you'll want to apply for a calibration dataset:

| Device Model | IR Stream Format | Stream Resolution |
| --- | --- | --- |
| D400, D410, D415 | Y16 | 1920x1080 |
| All other models | Y16 | 1280x800 |

| Device Model | Color Stream Format | Stream Resolution |
| --- | --- | --- |
| D415, D435, D435i | YUY2 | 1920x1080 |
| D455 | YUY2 | 1280x800 |

**Easy Recording with RealSense-Rust**: Tangram Vision is also the maintainer of the [RealSense-Rust crate](https://gitlab.com/tangram-vision/oss/realsense-rust), which makes it easy to run and record RealSense data from the safety of Rust. The `record_bag` example in the repository is already configured with the above settings. Use that to quickly record a calibration ROSbag for MetriCal.

### Avoiding Motion Blur

Whether or not a camera is global shutter (as many are on the RealSense line), motion blur can still affect image quality. This is the last thing you want when calibrating a camera, as it can lead to inaccurate results regardless of whether you're moving the sensor rig or the targets.

To avoid motion blur, make sure to stand still every second or so during calibration. MetriCal will filter out frames in motion automatically when processing the data. For more on this and other tips for calibrating cameras, see the [Single Camera Calibration Guide](/metrical/calibration_guides/single_camera_cal.md).

### Identifying the Sensors

Make sure you've recorded the sensor data in a way that makes each stream identifiable. If you're recording in a ROSbag or MCAP format, it's enough to prefix each device topic with its serial or name, like `sensor1/color/image_raw` and `sensor2/color/image_raw`.

If recording to a folder of images, keep in mind that MetriCal will expect all *component* folders to be in a single directory. This means that if you have two RealSense sensors, you'll need to record the data from both to the same folder:

```
observations/
|- sensor1-color/
|  |- 0001.png
|  |- 0002.png
|  |- ...
|- sensor1-ir-left/
|  |- 0001.png
|  |- 0002.png
|  |- ...
|- sensor1-ir-right/
|  |- 0001.png
|  |- 0002.png
|  |- ...
|- sensor2-color/
|  |- 0001.png
|  |- 0002.png
|  |- ...
...
```

See the [input data formats documentation](/metrical/configuration/data_formats.md) for more.

### The Right Target

We at Tangram Vision always recommend using a [markerboard](/metrical/targets/target_overview.md) for camera calibration. In fact, you can use multiple boards, which would ensure good coverage and depth of field! See our documentation on [using multiple targets](/metrical/targets/multiple_targets.md) for more.

## Running MetriCal

Once you have the data all ready, running MetriCal is as easy as running any other dataset:

```
metrical init -m *:opencv_radtan $DATA $INIT_PLEX
metrical calibrate -o realsense_cal.json $DATA $INIT_PLEX $OBJ

# Get our calibrated plex from the results using jq
jq .plex realsense_cal.json > realsense_cal_plex.json
```

Notice that we're setting all cameras (all topics: `*`) to use the `opencv_radtan` model. This is the model that Intel uses for all cameras on the RealSense line. And that's about it!
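Before flashing anything, it's worth a quick sanity check of the results. The sketch below is illustrative only; the exact arguments for Report and Display modes are documented on their respective pages, so double-check them there before relying on these invocations.

```
# Review the calibration summary in a human-readable format
# (argument form shown here is illustrative)
metrical report realsense_cal.json

# Visually spot-check the calibration against the captured data in Rerun
metrical display $DATA realsense_cal.json
```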
This whole process can take from 10 seconds to 3 minutes, depending on how much data you used. You should now have a set of calibrations for all components, which we'll then flash to the sensors in the next step.

First, though, if you're worried that the data you captured isn't the best for a calibration procedure...

### Seeding Extrinsics (Optional)

You can seed the calibration using the [`--preset-device`](/metrical/commands/init.md#options) option during the Init phase:

```
metrical init -m *:opencv_radtan \
    --preset-device RealSense435:["sensor1-ir-left","sensor1-ir-right","sensor1-color"] \
    ... $DATA $INIT_PLEX
```

This tells the calibration to use the extrinsics from the RealSense 435 as a starting point for the topics "sensor1-ir-left", "sensor1-ir-right", and "sensor1-color". You can pass multiple `--preset-device` options to seed multiple sensors. Note that order matters when using this option.

## Flashing the Calibration to the Sensors

Now here's the nifty part: we can script this entire process! The results of MetriCal can be flashed directly to the RealSense sensors using an OSS tool created by Tangram Vision called [RealSense Cal Flasher](https://gitlab.com/tangram-vision/oss/realsense-cal-flasher). If you haven't already, clone the repository and run through the "Setup" instructions.

### Sensor Config

Once you've set things up, it's time to create your sensor config. This is another JSON file we use especially for the RealSense Cal Flasher. It maps topics from our plex to the correct RealSense sensor using the serial number. Here's an example:

```
{
  "sensor_serial_one": {
    "left": "name_of_the_left_component_in_plex",
    "right": "name_of_the_right_component_in_plex"
  },
  "sensor_serial_two": {
    ...
  }
  ...
}
```

Fill that bad boy in, then run the flasher:

```
./rs-cal-flasher realsense_cal_plex.json .json
```

All of your RealSense sensors should now be calibrated and ready to go!

## Summary

1. Take a calibration dataset
2. Run the following script:

```
metrical init -m *:opencv_radtan $DATA $INIT_PLEX
metrical calibrate -o realsense_cal.json $DATA $INIT_PLEX $OBJ

# Get our calibrated plex from the results using jq
jq .plex realsense_cal.json > realsense_cal_plex.json

./rs-cal-flasher realsense_cal_plex.json .json
```

Boom. Done.

## Links and Resources

* Intel RealSense D400 Series Custom Calibration Whitepaper
* The Dynamic Calibration Programmer documentation for RealSense

---

# Migrate from Kalibr to MetriCal

## Objectives

* Migrate your calibration process from Kalibr to MetriCal

***

One of the most common questions we get is how to migrate from Kalibr to MetriCal. This tutorial will guide you through the process of migration, the main differences in operation, and what you should expect.

## Why Migrate?

We'll freely admit, Kalibr makes a lot of sense when first getting started with camera calibration. It's accurate, documented, and has a large user base. On top of that, it's open source, which is a huge draw for those who like to dive into the math. However, it is **not** designed for scalable production. It is an academic tool for small-scale projects.
MetriCal is more robust, scalable, and maintainable, with a whole company devoted to continuously improving its user experience. Some improvements you can expect:

* Drastically improved time to calibration (>10x improvement on large datasets)
* Use multiple targets, not just one
* Equal or greater accuracy in calibration results
* Easily digestible metrics across every component
* More modalities supported out of the box
  * Cameras, IMU, LiDAR
  * Radar, sensor-to-chassis coming soon
* More intrinsics models supported out of the box
* Complete reporting of parameter covariances
* Enhanced visualization tools

...among others.

The good news is that much of your data collection process can remain the same! ROSbags are supported, along with MCAPs, and MetriCal can use [Kalibr-style AprilGrids](/metrical/targets/target_overview.md) as targets. Whatever you're doing for data collection, just keep doing it! Once you're ready to scale your systems, we're confident that you'll find MetriCal to be the right tool for the job.

## Operational Differences

### Camchains vs. Plex

The most significant difference between Kalibr and MetriCal is the way they handle extrinsics.

Kalibr uses a "camchain" structure, where each component is connected to the next in a chain. In order to derive the extrinsics between two cameras, you must traverse the chain from one camera to the other.

MetriCal uses the [plex](/metrical/core_concepts/plex_overview.md) structure, where each component is connected to every other component in a fully connected graph, along with a measurement of uncertainty along each connection. This means that there are actually several ways to derive the spatial constraints between two components, depending on the path you take through the graph.

**Spatial Constraint Traversal**: You can read more about how extrinsics are treated in the Plex by perusing the [Spatial Constraint Traversal](/metrical/core_concepts/constraints.md#spatial-constraints) documentation.

In practice, it's best to use the [Shape](/metrical/commands/shape/shape_overview.md) command to derive the "best" (i.e. most precise) path between two components. If you would like to program your own pathfinding algorithm, you can use our reference implementation [found here](https://gitlab.com/tangram-vision/oss/spatial_constraint_traversal).

### Motion Filtering

Artifacts from excessive motion can ruin a calibration. To prevent this fate, MetriCal has a built-in motion filter that automatically identifies and removes frames with excessive motion from your dataset. This is especially useful for large datasets where most of the data can be either redundant or noisy.

**Motion Filtering**: Read more about the motion filter in the documentation for [Calibrate](/metrical/commands/calibrate.md#motion-filtering) mode.

Those used to Kalibr will be familiar with the `bag-freq` parameter, which is used to remove data in a more heuristic way by taking every Nth observation. MetriCal uses a fundamentally different philosophy; with the right data capture process, there should never be a reason to throw away data without cause.
If you do find that you need to downsample your data, or you would like to use all of your data during a calibration, we recommend either modifying the [camera motion threshold](/metrical/commands/calibrate.md#--camera-motion-threshold-camera_motion_threshold) or [disabling the motion filter](/metrical/commands/calibrate.md#--disable-motion-filter) entirely.

## Migrations

### Camera Model Conversion

Here's a handy conversion chart for Kalibr camera models to MetriCal camera models:

| Kalibr Model | Corresponding MetriCal Model |
| --- | --- |
| `pinhole` | [`no_distortion`](/metrical/calibration_models/cameras.md#no-distortion) |
| `pinhole-radtan` | [`opencv_radtan`](/metrical/calibration_models/cameras.md#opencv-radtan) |
| `pinhole-equi` | [`opencv_fisheye`](/metrical/calibration_models/cameras.md#opencv-fisheye) |
| `omni` | [`omni`](/metrical/calibration_models/cameras.md#omnidirectional-omni) |
| `ds` | [`double_sphere`](/metrical/calibration_models/cameras.md#double-sphere) |
| `eucm` | [`eucm`](/metrical/calibration_models/cameras.md#eucm) |

### Describing Your Target/Object Space

The conversion between Kalibr and MetriCal is straightforward. The only difference is that MetriCal can support more than one target at a time, which means MetriCal has to add UUIDs to each. Here's an example of how to convert a Kalibr target YAML to a MetriCal object space JSON:

#### Kalibr Target YAML

```
target_type: "aprilgrid"
tagCols: 5
tagRows: 6
tagSize: 0.088
tagSpacing: 0.3
```

#### MetriCal Object Space JSON

```
{
  "object_spaces": {
    "24e6df7b-b756-4b9c-a719-660d45d796bf": {
      "descriptor": {
        "variances": [0.01, 0.01, 0.01] // Recommended default values
      },
      "detector": {
        "april_grid": {
          "marker_dictionary": "ApriltagKalibr",
          "marker_grid_height": 6,
          "marker_grid_width": 5,
          "marker_length": 0.088,
          "tag_spacing": 0.3
        }
      }
    }
  }
}
```

### Script Migration

If you're used to running Kalibr from the command line, you'll find that MetriCal is quite similar; it just has a few more options. Here's an example of migrating a calibration script from Kalibr to MetriCal:

#### Kalibr Command

```
kalibr_calibrate_cameras \
    --bag data.bag \
    --topics /cam_one /cam_two \
    --models pinhole eucm \
    --target target.yaml
```

#### MetriCal Command

```
# Profile our data and create an initial plex
metrical init \
    -m /cam_one:no_distortion \
    -m /cam_two:eucm \
    data.bag init_plex.json

# Calibrate our system
metrical calibrate \
    -o results.json \
    data.bag init_plex.json target.json

# Derive the minimum spanning tree of our plex
metrical shape mst results.json .
```
---

# Experiment with Different Intrinsics Models

```
# Run an initial calibration [slower]
metrical init -m *cam*:opencv_radtan $INIT_PLEX
metrical calibrate -o $OPENCV_RES $DATA $INIT_PLEX $OBJ

# Modify our init plex to use EUCM for all cameras, and recalibrate [fast]
metrical init -y -m *cam*:eucm $INIT_PLEX
metrical calibrate -o $EUCM_RES $DATA $INIT_PLEX $OBJ

# The same operation, except we create a new init plex from the first one [fast]
metrical init -m *cam*:eucm -p $INIT_PLEX $SECOND_INIT_PLEX
metrical calibrate -o $EUCM_RES $DATA $SECOND_INIT_PLEX $OBJ
```

This workflow demonstrates how effortless it can be to play around with models using the same dataset. Note that the second and third runs of the [Init command](/metrical/commands/init.md) will result in the same plex, but in two different ways:

* The second run tells MetriCal to overwrite (`-y`) the plex at `$INIT_PLEX`. MetriCal will use this `$INIT_PLEX` as a seed plex for the operation. It will modify the file in place.
* The third run explicitly tells MetriCal to use `$INIT_PLEX` as a seed. It will output the resulting init plex to `$SECOND_INIT_PLEX`.

Since the Calibrate command [caches detections](/metrical/commands/calibrate.md#cached-detections) between runs, the second and third runs of this data will process *much* faster than the first. This all results in a pain-free way to rapidly test several different models for your components.

---

# Billing

***

## Creating a Subscription

To create a license for MetriCal, you must have an active (paid) subscription. To create a subscription, navigate to the Billing page in the Tangram Vision Hub. Choose the plan that you want to subscribe to, and click the “Subscribe” button. You will be sent to a Stripe billing page to enter payment information.

## Cancelling a Subscription

To cancel a subscription, navigate to the “Billing” page in the Hub and click “Manage Subscription” to enter the Stripe portal. Click “Cancel plan”.

## Adding, Changing, & Deleting Forms of Payment

To access the Stripe interface and make a change to a payment method, navigate to the “Billing” page in the Hub. Click on the “Add Payment Method” button to add a credit card. If you have a paid subscription, manage your payment details via the “Manage Subscription” button.

## Adding & Updating Billing Contact

Each Tangram Vision account must have an up-to-date billing contact email. The default email address used will be the email address used when you created your Tangram Vision account. Should you wish to change the billing contact associated with your account, navigate to the Billing page and find the “Billing Contact” section. In the email field, enter the updated email address and click “Save Changes.”

![Screenshot of form for updating billing email address](/assets/images/hub_update_billing_contact-f35bf70527a46cb37fb02190fa235a7a.png)

---

# Contact Us

***

## User Support

Your experience with MetriCal is important to us! If you have any questions, comments, or feedback, please reach out to us at . We appreciate your input and are here to help you get the most out of MetriCal.
## Partnering, White Labeling, and Licensing Inquiries[​](#partnering-white-labeling-and-licensing-inquiries "Direct link to Partnering, White Labeling, and Licensing Inquiries") MetriCal is a commercial product intended for enterprise-level autonomy at scale. We at Tangram Vision pride ourselves on providing the best possible calibration experience for our customers, and we're happy to work with you to find a licensing solution that fits your needs. If you are interested in white labeling or licensing MetriCal for your own use, please reach out to us at ## How to Cite MetriCal[​](#how-to-cite-metrical "Direct link to How to Cite MetriCal") First off, we're thrilled to help you further your research in perception! Please cite MetriCal to acknowledge its contribution to your work. This can be done by including a reference to MetriCal in the software or methods section of your paper. Suggested citation format: ``` @software{MetriCal, title = {MetriCal: Multimodal Calibration for Robotics and Automation at Scale}, author = {{Tangram Vision Development Team}}, url = {https://www.tangramvision.com}, version = {insert version number}, date = {insert date of usage}, year = {2025}, publisher = {{Tangram Robotics, Inc.}}, address = {Online}, } ``` Please replace "insert version number" with the version of MetriCal you used and "insert date of usage" with the date(s) you used the tool in your research. This citation format helps ensure that MetriCal's development team receives appropriate credit for their work and facilitates the tool's discovery by other researchers. --- # Legal Information *** ## Disclaimers[​](#disclaimers "Direct link to Disclaimers") Tangram Vision, Tangram Vision Platform, Plex, and the Tangram Vision logo are trademarks ™ of Tangram Robotics, Inc. The following Tangram Vision technologies and Platform components are protected under U.S. Patent No. EP4384360A2 (and/or other jurisdictions as applicable): Plex, components, constraints, and calibration processes. All content herein © 2021-2025 Tangram Robotics, Inc. All rights reserved. ## Confidentiality[​](#confidentiality "Direct link to Confidentiality") The information contained in the Tangram Vision Documentation is privileged and only for the information of the intended recipient and may not be modified, published or redistributed without the prior written consent of Tangram Robotics, Inc. ## 3rd Party Licenses[​](#3rd-party-licenses "Direct link to 3rd Party Licenses") The Tangram Vision Ecosystem of software is built in part thanks to the following Free and Open Source projects. We have provided links to each project as well as the original license text below. 
### Apache 2.0 License[​](#apache-20-license "Direct link to Apache 2.0 License") 🔍 3rd party code licensed with Apache 2.0 * [ab\_glyph](//github.com/alexheretic/ab-glyph) * [ab\_glyph\_rasterizer](//github.com/alexheretic/ab-glyph) * [accesskit](//github.com/AccessKit/accesskit) * [accesskit\_consumer](//github.com/AccessKit/accesskit) * [accesskit\_unix](//github.com/AccessKit/accesskit) * [accesskit\_winit](//github.com/AccessKit/accesskit) * [addr2line](//github.com/gimli-rs/addr2line) * [adler](//github.com/jonas-schievink/adler/blob/master/LICENSE-APACHE) * [aead](//github.com/RustCrypto/traits) * [aes-gcm](//github.com/RustCrypto/AEADs) * [aes-soft](//github.com/RustCrypto/block-ciphers) * [aes](//github.com/RustCrypto/block-ciphers) * [ahash](//github.com/tkaitchuck/ahash) * [allocator-api2](//github.com/zakarumych/allocator-api2) * [anstream](//github.com/rust-cli/anstyle.git) * [anstyle-parse](//github.com/rust-cli/anstyle.git) * [anstyle-query](//github.com/rust-cli/anstyle) * [anstyle](//github.com/rust-cli/anstyle.git) * [anyhow](//github.com/dtolnay/anyhow) * [approx](//github.com/brendanzab/approx/blob/master/LICENSE) * [arboard](//github.com/1Password/arboard) * [argmin-math](//github.com/argmin-rs/argmin) * [argmin](//github.com/argmin-rs/argmin) * [array-init-cursor](//github.com/planus-org/planus) * [array-init](//github.com/Manishearth/array-init/) * [arrayvec](//github.com/bluss/arrayvec) * [arrow-array](//github.com/apache/arrow-rs) * [arrow-buffer](//github.com/apache/arrow-rs) * [arrow-data](//github.com/apache/arrow-rs) * [arrow-format](//github.com/DataEngineeringLabs/arrow-format) * [arrow-schema](//github.com/apache/arrow-rs) * [as-raw-xcb-connection](//github.com/psychon/as-raw-xcb-connection) * [ascii](//github.com/tomprogrammer/rust-ascii) * [ash](//github.com/MaikKlein/ash) * [async-attributes](//github.com/async-rs/async-attributes) * [async-broadcast](//github.com/smol-rs/async-broadcast) * [async-channel](//github.com/smol-rs/async-channel) * [async-executor](//github.com/smol-rs/async-executor) * [async-fs](//github.com/smol-rs/async-fs) * [async-global-executor](//github.com/Keruspe/async-global-executor) * [async-io](//github.com/smol-rs/async-io) * [async-lock](//github.com/smol-rs/async-lock) * [async-net](//github.com/smol-rs/async-net) * [async-once-cell](//github.com/danieldg/async-once-cell) * [async-process](//github.com/smol-rs/async-process) * [async-signal](//github.com/smol-rs/async-signal) * [async-std](//github.com/async-rs/async-std) * [async-task](//github.com/smol-rs/async-task) * [async-trait](//github.com/dtolnay/async-trait) * [atomic-waker](//github.com/smol-rs/atomic-waker) * [atspi-common](//github.com/odilia-app/atspi) * [atspi-connection](//github.com/odilia-app/atspi/) * [atspi-proxies](//github.com/odilia-app/atspi) * [atspi](//github.com/odilia-app/atspi) * [az](//gitlab.com/tspiteri/az) * [backtrace-ext](//github.com/gankra/backtrace-ext) * [backtrace](//github.com/rust-lang/backtrace-rs) * [base16ct](//github.com/RustCrypto/formats/tree/master/base16ct) * [base64](//github.com/marshallpierce/rust-base64) * [base64ct](//github.com/RustCrypto/formats/tree/master/base64ct) * [bit-set](//github.com/contain-rs/bit-set) * [bit-vec](//github.com/contain-rs/bit-vec) * [bit\_field](//github.com/phil-opp/rust-bit-field) * [bitflags](//github.com/bitflags/bitflags) * [bitstream-io](//github.com/tuffy/bitstream-io) * [block-buffer](//github.com/RustCrypto/utils) * [blocking](//github.com/smol-rs/blocking) * 
[bstr](//github.com/BurntSushi/bstr) * [bumpalo](//github.com/fitzgen/bumpalo) * [bytemuck](//github.com/Lokathor/bytemuck) * [bytemuck\_derive](//github.com/Lokathor/bytemuck) * [bzip2-sys](//github.com/alexcrichton/bzip2-rs) * [bzip2](//github.com/alexcrichton/bzip2-rs) * [cacache](//github.com/zkat/cacache-rs) * [cdr](//github.com/hrektts/cdr-rs) * [cfg-if](//github.com/alexcrichton/cfg-if) * [chrono](//github.com/chronotope/chrono) * [chunked\_transfer](//github.com/frewsxcv/rust-chunked-transfer) * [cipher](//github.com/RustCrypto/traits) * [clap-verbosity-flag](//github.com/clap-rs/clap-verbosity-flag) * [clap](//github.com/clap-rs/clap) * [clap\_builder](//github.com/clap-rs/clap) * [clap\_complete](//github.com/clap-rs/clap/tree/master/clap_complete) * [clap\_derive](//github.com/clap-rs/clap/tree/master/clap_derive) * [clap\_lex](//github.com/clap-rs/clap/tree/master/clap_lex) * [clean-path](//gitlab.com/foo-jin/clean-path) * [cli-table-derive](//github.com/devashishdxt/cli-table) * [cli-table](//github.com/devashishdxt/cli-table) * [codespan-reporting](//github.com/brendanzab/codespan) * [colorchoice](//github.com/rust-cli/anstyle) * [colorgrad](//github.com/mazznoer/colorgrad-rs) * [concurrent-queue](//github.com/smol-rs/concurrent-queue) * [const-oid](//github.com/RustCrypto/formats/tree/master/const-oid) * [const-random-macro](//github.com/tkaitchuck/constrandom) * [const-random](//github.com/tkaitchuck/constrandom) * [const\_fn](//github.com/taiki-e/const_fn) * [const\_soft\_float](//github.com/823984418/const_soft_float) * [constgebra](//github.com/knickish/constgebra) * [cookie](//github.com/SergioBenitez/cookie-rs) * [cpufeatures](//github.com/RustCrypto/utils) * [cpuid-bool](//github.com/RustCrypto/utils) * [crc32fast](//github.com/srijs/rust-crc32fast) * [crossbeam-channel](//github.com/crossbeam-rs/crossbeam) * [crossbeam-deque](//github.com/crossbeam-rs/crossbeam) * [crossbeam-epoch](//github.com/crossbeam-rs/crossbeam) * [crossbeam-queue](//github.com/crossbeam-rs/crossbeam) * [crossbeam-utils](//github.com/crossbeam-rs/crossbeam) * [crossbeam](//github.com/crossbeam-rs/crossbeam) * [crypto-bigint](//github.com/RustCrypto/crypto-bigint) * [crypto-common](//github.com/RustCrypto/traits) * [crypto-mac](//github.com/RustCrypto/traits) * [csscolorparser](//github.com/mazznoer/csscolorparser-rs) * [ctr](//github.com/RustCrypto/stream-ciphers) * [cursor-icon](//github.com/rust-windowing/cursor-icon) * [custom\_derive](//github.com/DanielKeep/rust-custom-derive/tree/custom_derive-master) * [data-url](//github.com/servo/rust-url) * [delegate](//github.com/kobzol/rust-delegate) * [der](//github.com/RustCrypto/formats/tree/master/der) * [deranged](//github.com/jhpratt/deranged) * [digest](//github.com/RustCrypto/traits) * [directories](//github.com/soc/directories-rs) * [dirs-next](//github.com/xdg-rs/dirs) * [dirs-sys](//github.com/dirs-dev/dirs-sys-rs) * [dns-lookup](//github.com/keeperofdakeys/dns-lookup/) * [document-features](//github.com/slint-ui/document-features) * [downcast-rs](//github.com/marcianx/downcast-rs) * [drawille](//github.com/ftxqxd/drawille-rs) * [dyn-clone](//github.com/dtolnay/dyn-clone) * [ecdsa](//github.com/RustCrypto/signatures/tree/master/ecdsa) * [ecolor](//github.com/emilk/egui) * [ed25519](//github.com/RustCrypto/signatures/tree/master/ed25519) * [eframe](//github.com/emilk/egui/tree/master/crates/eframe) * [egui-wgpu](//github.com/emilk/egui/tree/master/crates/egui-wgpu) * [egui-winit](//github.com/emilk/egui/tree/master/crates/egui-winit) * 
[egui](//github.com/emilk/egui) * [egui\_commonmark](//github.com/lampsitter/egui_commonmark) * [egui\_commonmark\_backend](//github.com/lampsitter/egui_commonmark) * [egui\_extras](//github.com/emilk/egui) * [egui\_glow](//github.com/emilk/egui/tree/master/crates/egui_glow) * [egui\_plot](//github.com/emilk/egui) * [egui\_tiles](//github.com/rerun-io/egui_tiles) * [ehttp](//github.com/emilk/ehttp) * [either](//github.com/bluss/either) * [elliptic-curve](//github.com/RustCrypto/traits/tree/master/elliptic-curve) * [emath](//github.com/emilk/egui/tree/master/crates/emath) * [encoding\_rs](//github.com/hsivonen/encoding_rs) * [enum-as-inner](//github.com/bluejekyll/enum-as-inner) * [enum-map-derive](//codeberg.org/xfix/enum-map) * [enum-map](//codeberg.org/xfix/enum-map) * [enum\_dispatch](//gitlab.com/antonok/enum_dispatch) * [enumflags2](//github.com/meithecatte/enumflags2) * [enumflags2\_derive](//github.com/meithecatte/enumflags2) * [enumn](//github.com/dtolnay/enumn) * [enumset](//github.com/Lymia/enumset) * [enumset\_derive](//github.com/Lymia/enumset) * [env\_logger](//github.com/rust-cli/env_logger) * [equivalent](//github.com/cuviper/equivalent) * [errno](//github.com/lambda-fairy/rust-errno) * [ethnum](//github.com/nlordell/ethnum-rs) * [event-listener-strategy](//github.com/smol-rs/event-listener) * [event-listener](//github.com/smol-rs/event-listener) * [ewebsock](//github.com/rerun-io/ewebsock) * [fastrand](//github.com/smol-rs/fastrand) * [fdeflate](//github.com/image-rs/fdeflate) * [ff](//github.com/zkcrypto/ff) * [filetime](//github.com/alexcrichton/filetime) * [fixed](//gitlab.com/tspiteri/fixed) * [fixedbitset](//github.com/petgraph/fixedbitset) * [flate2](//github.com/rust-lang/flate2-rs) * [flume](//github.com/zesterer/flume) * [fnv](//github.com/servo/rust-fnv) * [form\_urlencoded](//github.com/servo/rust-url) * [futures-channel](//github.com/rust-lang/futures-rs) * [futures-core](//github.com/rust-lang/futures-rs) * [futures-executor](//github.com/rust-lang/futures-rs) * [futures-io](//github.com/rust-lang/futures-rs) * [futures-lite](//github.com/smol-rs/futures-lite) * [futures-macro](//github.com/rust-lang/futures-rs) * [futures-sink](//github.com/rust-lang/futures-rs) * [futures-task](//github.com/rust-lang/futures-rs) * [futures-timer](//github.com/rust-lang/futures-rs) * [futures-util](//github.com/rust-lang/futures-rs) * [futures](//github.com/rust-lang/futures-rs) * [geo-types](//github.com/georust/geo) * [gethostname](//github.com/lunaryorn/gethostname.rs) * [getrandom](//github.com/rust-random/getrandom) * [ghash](//github.com/RustCrypto/universal-hashes) * [gif](//github.com/image-rs/image-gif) * [gimli](//github.com/gimli-rs/gimli) * [glam](//github.com/bitshifter/glam-rs) * [glow](//github.com/grovesNL/glow) * [gltf-derive](//github.com/gltf-rs/gltf) * [gltf-json](//github.com/gltf-rs/gltf) * [gltf](//github.com/gltf-rs/gltf) * [gpu-alloc-types](//github.com/zakarumych/gpu-alloc) * [gpu-alloc](//github.com/zakarumych/gpu-alloc) * [gpu-descriptor-types](//github.com/zakarumych/gpu-descriptor) * [gpu-descriptor](//github.com/zakarumych/gpu-descriptor) * [group](//github.com/zkcrypto/group) * [half](//github.com/starkat99/half-rs) * [hash\_hasher](//github.com/Fraser999/Hash-Hasher.git) * [hash32](//github.com/japaric/hash32) * [hashbrown](//github.com/rust-lang/hashbrown) * [heapless](//github.com/rust-embedded/heapless) * [heck](//github.com/withoutboats/heck) * [hex](//github.com/KokaKiwi/rust-hex) * [hexasphere](//github.com/OptimisticPeach/hexasphere.git) 
* [hkdf](//github.com/RustCrypto/KDFs/) * [hmac](//github.com/RustCrypto/MACs) * [home](//github.com/rust-lang/cargo) * [http-cache-reqwest](//github.com/06chaynes/http-cache) * [http-cache](//github.com/06chaynes/http-cache) * [http-client](//github.com/http-rs/http-client) * [http-serde](//gitlab.com/kornelski/http-serde) * [http-types](//github.com/http-rs/http-types) * [http](//github.com/hyperium/http) * [httparse](//github.com/seanmonstar/httparse) * [httpdate](//github.com/pyfisch/httpdate) * [humantime](//github.com/tailhook/humantime) * [hyper-rustls](//github.com/rustls/hyper-rustls) * [iana-time-zone](//github.com/strawlab/iana-time-zone) * [ident\_case](//github.com/TedDriggs/ident_case) * [idna](//github.com/servo/rust-url/) * [indexmap](//github.com/bluss/indexmap) * [io-lifetimes](//github.com/sunfishcode/io-lifetimes) * [ipnet](//github.com/krisprice/ipnet) * [itertools](//github.com/rust-itertools/itertools) * [itoa](//github.com/dtolnay/itoa) * [jpeg-decoder](//github.com/image-rs/jpeg-decoder) * [js-sys](//github.com/rustwasm/wasm-bindgen/tree/master/crates/js-sys) * [kdtree](//github.com/mrhooray/kdtree-rs) * [khronos-egl](//github.com/timothee-haudebourg/khronos-egl) * [kurbo](//github.com/linebender/kurbo) * [kv-log-macro](//github.com/yoshuawuyts/kv-log-macro) * [lazy\_static](//github.com/rust-lang-nursery/lazy-static.rs) * [libc](//github.com/rust-lang/libc) * [libm](//github.com/rust-lang/libm) * [libnghttp2-sys](//github.com/alexcrichton/nghttp2-rs) * [libtbb](//github.com/oneapi-src/oneTBB) * [libz-sys](//github.com/rust-lang/libz-sys) * [linfa-clustering](//github.com/rust-ml/linfa/) * [linfa-linalg](//github.com/rust-ml/linfa-linalg) * [linfa-nn](//github.com/rust-ml/linfa/) * [linfa](//github.com/rust-ml/linfa) * [linked-hash-map](//github.com/contain-rs/linked-hash-map) * [litrs](//github.com/LukasKalbertodt/litrs/) * [lock\_api](//github.com/Amanieu/parking_lot) * [log-once](//github.com/Luthaf/log-once) * [log](//github.com/rust-lang/log) * [maplit](//github.com/bluss/maplit) * [matrixmultiply](//github.com/bluss/matrixmultiply/) * [memmap2](//github.com/RazrFalcon/memmap2-rs) * [memory-stats](//github.com/Arc-blroth/memory-stats) * [mime](//github.com/hyperium/mime) * [minimal-lexical](//github.com/Alexhuszagh/minimal-lexical) * [miniz\_oxide](//github.com/Frommi/miniz_oxide/tree/master/miniz_oxide) * [naga](//github.com/gfx-rs/wgpu/tree/trunk/naga) * [nalgebra-macros](//github.com/dimforge/nalgebra) * [nalgebra-sparse](//github.com/dimforge/nalgebra) * [nalgebra](//github.com/dimforge/nalgebra) * [ndarray-rand](//github.com/rust-ndarray/ndarray) * [ndarray-stats](//github.com/rust-ndarray/ndarray-stats) * [ndarray](//github.com/rust-ndarray/ndarray) * [nohash-hasher](//github.com/paritytech/nohash-hasher) * [noisy\_float](//github.com/SergiusIW/noisy_float-rs) * [num-bigint](//github.com/rust-num/num-bigint) * [num-complex](//github.com/rust-num/num-complex) * [num-derive](//github.com/rust-num/num-derive) * [num-integer](//github.com/rust-num/num-integer) * [num-iter](//github.com/rust-num/num-iter) * [num-rational](//github.com/rust-num/num-rational) * [num-traits](//github.com/rust-num/num-traits) * [num](//github.com/rust-num/num) * [num\_cpus](//github.com/seanmonstar/num_cpus) * [num\_threads](//github.com/jhpratt/num_threads) * [object](//github.com/gimli-rs/object) * [once\_cell](//github.com/matklad/once_cell) * [opaque-debug](//github.com/RustCrypto/utils) * [opencv](//github.com/opencv/opencv) * 
[openssl-probe](//github.com/alexcrichton/openssl-probe) * [order-stat](//github.com/huonw/order-stat) * [ordered-stream](//github.com/danieldg/ordered-stream) * [owned\_ttf\_parser](//github.com/alexheretic/owned-ttf-parser) * [p256](//github.com/RustCrypto/elliptic-curves/tree/master/p256) * [p384](//github.com/RustCrypto/elliptic-curves/tree/master/p384) * [parking](//github.com/smol-rs/parking) * [parking\_lot](//github.com/Amanieu/parking_lot) * [parking\_lot\_core](//github.com/Amanieu/parking_lot) * [paste](//github.com/dtolnay/paste) * [percent-encoding](//github.com/servo/rust-url/) * [petgraph](//github.com/petgraph/petgraph) * [pin-project-internal](//github.com/taiki-e/pin-project) * [pin-project-lite](//github.com/taiki-e/pin-project-lite) * [pin-project](//github.com/taiki-e/pin-project) * [pin-utils](//github.com/rust-lang-nursery/pin-utils) * [piper](//github.com/notgull/piper) * [pkcs8](//github.com/RustCrypto/formats/tree/master/pkcs8) * [planus](//github.com/planus-org/planus) * [png](//github.com/image-rs/image-png) * [poll-promise](//github.com/EmbarkStudios/poll-promise) * [polling](//github.com/smol-rs/polling) * [pollster](//github.com/zesterer/pollster) * [polyval](//github.com/RustCrypto/universal-hashes) * [portable-atomic](//github.com/taiki-e/portable-atomic) * [powerfmt](//github.com/jhpratt/powerfmt) * [ppv-lite86](//github.com/cryptocorrosion/cryptocorrosion) * [primal-check](//github.com/huonw/primal) * [primeorder](//github.com/RustCrypto/elliptic-curves/tree/master/primeorder) * [proc-macro-crate](//github.com/bkchr/proc-macro-crate) * [proc-macro-hack](//github.com/dtolnay/proc-macro-hack) * [proc-macro2](//github.com/alexcrichton/proc-macro2) * [profiling-procmacros](//github.com/aclysma/profiling) * [profiling](//github.com/aclysma/profiling) * [proptest-derive](//github.com/AltSysrq/proptest) * [proptest](//github.com/proptest-rs/proptest) * [puffin](//github.com/EmbarkStudios/puffin) * [puffin\_http](//github.com/EmbarkStudios/puffin) * [qoi](//github.com/aldanor/qoi-rust) * [quick-error](//github.com/tailhook/quick-error) * [quote](//github.com/dtolnay/quote) * [rand](//github.com/rust-random/rand) * [rand\_chacha](//github.com/rust-random/rand) * [rand\_core](//github.com/rust-random/rand) * [rand\_distr](//github.com/rust-random/rand) * [rand\_xorshift](//github.com/rust-random/rngs) * [rand\_xoshiro](//github.com/rust-random/rngs) * [raw-window-handle](//github.com/rust-windowing/raw-window-handle) * [rawpointer](//github.com/bluss/rawpointer/) * [rayon-core](//github.com/rayon-rs/rayon) * [rayon](//github.com/rayon-rs/rayon) * [re\_analytics](//github.com/rerun-io/rerun) * [re\_blueprint\_tree](//github.com/rerun-io/rerun) * [re\_build\_info](//github.com/rerun-io/rerun) * [re\_case](//github.com/rerun-io/rerun) * [re\_chunk](//github.com/rerun-io/rerun) * [re\_chunk\_store](//github.com/rerun-io/rerun) * [re\_component\_ui](//github.com/rerun-io/rerun) * [re\_context\_menu](//github.com/rerun-io/rerun) * [re\_crash\_handler](//github.com/rerun-io/rerun) * [re\_data\_loader](//github.com/rerun-io/rerun) * [re\_data\_source](//github.com/rerun-io/rerun) * [re\_data\_ui](//github.com/rerun-io/rerun) * [re\_entity\_db](//github.com/rerun-io/rerun) * [re\_error](//github.com/rerun-io/rerun) * [re\_format](//github.com/rerun-io/rerun) * [re\_format\_arrow](//github.com/rerun-io/rerun) * [re\_int\_histogram](//github.com/rerun-io/rerun) * [re\_log](//github.com/rerun-io/rerun) * [re\_log\_encoding](//github.com/rerun-io/rerun) * 
[re\_log\_types](//github.com/rerun-io/rerun) * [re\_math](//github.com/rerun-io/rerun) * [re\_memory](//github.com/rerun-io/rerun) * [re\_query](//github.com/rerun-io/rerun) * [re\_renderer](//github.com/rerun-io/rerun) * [re\_sdk](//github.com/rerun-io/rerun) * [re\_sdk\_comms](//github.com/rerun-io/rerun) * [re\_selection\_panel](//github.com/rerun-io/rerun) * [re\_smart\_channel](//github.com/rerun-io/rerun) * [re\_space\_view](//github.com/rerun-io/rerun) * [re\_space\_view\_bar\_chart](//github.com/rerun-io/rerun) * [re\_space\_view\_dataframe](//github.com/rerun-io/rerun) * [re\_space\_view\_spatial](//github.com/rerun-io/rerun) * [re\_space\_view\_tensor](//github.com/rerun-io/rerun) * [re\_space\_view\_text\_document](//github.com/rerun-io/rerun) * [re\_space\_view\_text\_log](//github.com/rerun-io/rerun) * [re\_space\_view\_time\_series](//github.com/rerun-io/rerun) * [re\_string\_interner](//github.com/rerun-io/rerun) * [re\_time\_panel](//github.com/rerun-io/rerun) * [re\_tracing](//github.com/rerun-io/rerun) * [re\_tuid](//github.com/rerun-io/rerun) * [re\_types](//github.com/rerun-io/rerun) * [re\_types\_blueprint](//github.com/rerun-io/rerun) * [re\_types\_core](//github.com/rerun-io/rerun) * [re\_viewer](//github.com/rerun-io/rerun) * [re\_viewer\_context](//github.com/rerun-io/rerun) * [re\_viewport\_blueprint](//github.com/rerun-io/rerun) * [re\_web\_viewer\_server](//github.com/rerun-io/rerun) * [re\_ws\_comms](//github.com/rerun-io/rerun) * [realsense-rust](//gitlab.com/tangram-vision-oss/realsense-rust) * [realsense-sys](//gitlab.com/tangram-vision-oss/realsense-rust) * [reflink-copy](//github.com/cargo-bins/reflink-copy) * [regex-automata](//github.com/rust-lang/regex/tree/master/regex-automata) * [regex-syntax](//github.com/rust-lang/regex/tree/master/regex-syntax) * [regex](//github.com/rust-lang/regex) * [renderdoc-sys](//github.com/ebkalderon/renderdoc-rs) * [reqwest-middleware](//github.com/TrueLayer/reqwest-middleware) * [reqwest](//github.com/seanmonstar/reqwest) * [rerun](//github.com/rerun-io/rerun) * [ring-compat](//github.com/RustCrypto/ring-compat) * [ron](//github.com/ron-rs/ron) * [roxmltree](//github.com/RazrFalcon/roxmltree) * [rstar](//github.com/georust/rstar) * [rustc-demangle](//github.com/alexcrichton/rustc-demangle) * [rustc-hash](//github.com/rust-lang-nursery/rustc-hash) * [rustfft](//github.com/ejmahler/RustFFT) * [rustls-pemfile](//github.com/rustls/pemfile) * [rustls](//github.com/rustls/rustls) * [rustversion](//github.com/dtolnay/rustversion) * [rusty-fork](//github.com/altsysrq/rusty-fork) * [ryu](//github.com/dtolnay/ryu) * [safe\_arch](//github.com/Lokathor/safe_arch) * [scoped-tls](//github.com/alexcrichton/scoped-tls) * [scopeguard](//github.com/bluss/scopeguard) * [sct](//github.com/rustls/sct.rs) * [sec1](//github.com/RustCrypto/formats/tree/master/sec1) * [seq-macro](//github.com/dtolnay/seq-macro) * [serde-big-array](https://github.com/est31/serde-big-array) * [serde](//github.com/serde-rs/serde) * [serde\_bytes](//github.com/serde-rs/bytes) * [serde\_derive](//github.com/serde-rs/serde) * [serde\_derive\_internals](//github.com/serde-rs/serde) * [serde\_json](//github.com/serde-rs/json) * [serde\_qs](//github.com/samscott89/serde_qs) * [serde\_repr](//github.com/dtolnay/serde-repr) * [serde\_spanned](//github.com/toml-rs/toml) * [serde\_urlencoded](//github.com/nox/serde_urlencoded) * [serde\_yaml](//github.com/dtolnay/serde-yaml) * [sha-1](//github.com/RustCrypto/hashes) * [sha1](//github.com/RustCrypto/hashes) * 
[sha2](//github.com/RustCrypto/hashes) * [signal-hook-registry](//github.com/vorner/signal-hook) * [signature](//github.com/RustCrypto/traits/tree/master/signature) * [simba](//github.com/dimforge/simba) * [simdutf8](//github.com/rusticstuff/simdutf8) * [similar-asserts](//github.com/mitsuhiko/similar-asserts) * [similar](//github.com/mitsuhiko/similar) * [simplecss](//github.com/linebender/simplecss) * [siphasher](//github.com/jedisct1/rust-siphash) * [smallvec](//github.com/servo/rust-smallvec) * [smol\_str](//github.com/rust-analyzer/smol_str) * [socket2](//github.com/rust-lang/socket2) * [spinning\_top](//github.com/rust-osdev/spinning_top) * [spirv](//github.com/gfx-rs/rspirv) * [spki](//github.com/RustCrypto/formats/tree/master/spki) * [sprs](//github.com/sparsemat/sprs) * [ssri](//github.com/zkat/ssri-rs) * [stable\_deref\_trait](//github.com/storyyeller/stable_deref_trait) * [standback](//github.com/jhpratt/standback) * [static\_assertions](//github.com/nvzqz/static-assertions-rs) * [strength\_reduce](//github.com/ejmahler/strength_reduce) * [sublime\_fuzzy](//github.com/Schlechtwetterfront/fuzzy-rs) * [surf](//github.com/http-rs/surf) * [svgtypes](//github.com/RazrFalcon/svgtypes) * [syn](//github.com/dtolnay/syn) * [sync\_wrapper](//github.com/Actyx/sync_wrapper) * [task-local-extensions](//github.com/TrueLayer/task-local-extensions) * [tempfile](//github.com/Stebalien/tempfile) * [terminal\_size](//github.com/eminence/terminal-size) * [thiserror-impl](//github.com/dtolnay/thiserror) * [thiserror](//github.com/dtolnay/thiserror) * [time-core](//github.com/time-rs/time) * [time-macros-impl](//github.com/time-rs/time) * [time-macros](//github.com/time-rs/time) * [time](//github.com/time-rs/time) * [tiny-http](//github.com/tiny-http/tiny-http) * [tinyvec](//github.com/Lokathor/tinyvec) * [tinyvec\_macros](//github.com/Soveu/tinyvec_macros) * [tokio-rustls](//github.com/rustls/tokio-rustls) * [toml](//github.com/toml-rs/toml) * [toml\_datetime](//github.com/toml-rs/toml) * [toml\_edit](//github.com/toml-rs/toml) * [transpose](//github.com/ejmahler/transpose) * [ttf-parser](//github.com/RazrFalcon/ttf-parser) * [tungstenite](//github.com/snapview/tungstenite-rs) * [type-map](//github.com/kardeiz/type-map) * [typenum](//github.com/paholg/typenum) * [unarray](//github.com/cameron1024/unarray) * [unicase](//github.com/seanmonstar/unicase) * [unicode-bidi](//github.com/servo/unicode-bidi) * [unicode-normalization](//github.com/unicode-rs/unicode-normalization) * [unicode-segmentation](//github.com/unicode-rs/unicode-segmentation) * [unicode-width](//github.com/unicode-rs/unicode-width) * [unicode-xid](//github.com/unicode-rs/unicode-xid) * [unindent](//github.com/dtolnay/indoc) * [universal-hash](//github.com/RustCrypto/traits) * [unzip-n](//github.com/mexus/unzip-n) * [urdf-rs](//github.com/openrr/urdf-rs) * [ureq](//github.com/algesten/ureq) * [url](//github.com/servo/rust-url) * [utf-8](//github.com/SimonSapin/rust-utf8) * [utf8parse](//github.com/alacritty/vte) * [uuid](//github.com/uuid-rs/uuid) * [value-bag](//github.com/sval-rs/value-bag) * [vec1](//github.com/rustonaut/vec1/) * [wait-timeout](//github.com/alexcrichton/wait-timeout) * [waker-fn](//github.com/smol-rs/waker-fn) * [wasm-bindgen-backend](//github.com/rustwasm/wasm-bindgen/tree/master/crates/backend) * [wasm-bindgen-futures](//github.com/rustwasm/wasm-bindgen/tree/master/crates/futures) * [wasm-bindgen-macro-support](//github.com/rustwasm/wasm-bindgen/tree/master/crates/macro-support) * 
[wasm-bindgen-macro](//github.com/rustwasm/wasm-bindgen/tree/master/crates/macro) * [wasm-bindgen-shared](//github.com/rustwasm/wasm-bindgen/tree/master/crates/shared) * [wasm-bindgen](//github.com/rustwasm/wasm-bindgen) * [web-sys](//github.com/rustwasm/wasm-bindgen/tree/master/crates/web-sys) * [web-time](//github.com/daxpedda/web-time) * [webbrowser](//github.com/amodm/webbrowser-rs) * [weezl](//github.com/image-rs/lzw) * [wgpu-core](//github.com/gfx-rs/wgpu) * [wgpu-hal](//github.com/gfx-rs/wgpu) * [wgpu-types](//github.com/gfx-rs/wgpu) * [wgpu](//github.com/gfx-rs/wgpu) * [wide](//github.com/Lokathor/wide) * [winit](//github.com/rust-windowing/winit) * [x11rb-protocol](//github.com/psychon/x11rb) * [x11rb](//github.com/psychon/x11rb) * [xkeysym](//github.com/notgull/xkeysym) * [zerocopy](//github.com/google/zerocopy) * [zeroize](//github.com/RustCrypto/utils/tree/master/zeroize) * [zstd-safe](//github.com/gyscos/zstd-rs) * [zstd-sys](//github.com/gyscos/zstd-rs) * [zune-core](//github.com/etemesi254/zune-image) * [zune-inflate](//github.com/etemesi254/zune-image) * [zune-jpeg](//github.com/etemesi254/zune-image/tree/dev/zune-jpeg) 📋 Original Apache 2.0 License ``` Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. 
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright {yyyy} {name of copyright owner} Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` ### Apache-2.0 and OFL-1.1 / UFL-1.0[​](#apache-20-and-ofl-11--ufl-10 "Direct link to Apache-2.0 and OFL-1.1 / UFL-1.0") 🔍 3rd party code licensed as Apache 2.0 and packaging fonts via the OFL / UFL * [epaint](//github.com/emilk/egui/tree/master/crates/epaint) * [re\_ui](//github.com/rerun-io/rerun) 📋 Original OFL 1.1 License Text ``` PREAMBLE The goals of the Open Font License (OFL) are to stimulate worldwide development of collaborative font projects, to support the font creation efforts of academic and linguistic communities, and to provide a free and open framework in which fonts may be shared and improved in partnership with others. The OFL allows the licensed fonts to be used, studied, modified and redistributed freely as long as they are not sold by themselves. The fonts, including any derivative works, can be bundled, embedded, redistributed and/or sold with any software provided that any reserved names are not used by derivative works. The fonts and derivatives, however, cannot be released under any other type of license. 
The requirement for fonts to remain under this license does not apply to any document created using the fonts or their derivatives. DEFINITIONS “Font Software” refers to the set of files released by the Copyright Holder(s) under this license and clearly marked as such. This may include source files, build scripts and documentation. “Reserved Font Name” refers to any names specified as such after the copyright statement(s). “Original Version” refers to the collection of Font Software components as distributed by the Copyright Holder(s). “Modified Version” refers to any derivative made by adding to, deleting, or substituting – in part or in whole – any of the components of the Original Version, by changing formats or by porting the Font Software to a new environment. “Author” refers to any designer, engineer, programmer, technical writer or other person who contributed to the Font Software. PERMISSION & CONDITIONS Permission is hereby granted, free of charge, to any person obtaining a copy of the Font Software, to use, study, copy, merge, embed, modify, redistribute, and sell modified and unmodified copies of the Font Software, subject to the following conditions: 1) Neither the Font Software nor any of its individual components, in Original or Modified Versions, may be sold by itself. 2) Original or Modified Versions of the Font Software may be bundled, redistributed and/or sold with any software, provided that each copy contains the above copyright notice and this license. These can be included either as stand-alone text files, human-readable headers or in the appropriate machine-readable metadata fields within text or binary files as long as those fields can be easily viewed by the user. 3) No Modified Version of the Font Software may use the Reserved Font Name(s) unless explicit written permission is granted by the corresponding Copyright Holder. This restriction only applies to the primary font name as presented to the users. 4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font Software shall not be used to promote, endorse or advertise any Modified Version, except to acknowledge the contribution(s) of the Copyright Holder(s) and the Author(s) or with their explicit written permission. 5) The Font Software, modified or unmodified, in part or in whole, must be distributed entirely under this license, and must not be distributed under any other license. The requirement for fonts to remain under this license does not apply to any document created using the Font Software. TERMINATION This license becomes null and void if any of the above conditions are not met. DISCLAIMER THE FONT SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE. ``` 📋 Original UFL 1.0 License Text ``` UBUNTU FONT LICENCE Version 1.0 PREAMBLE This licence allows the licensed fonts to be used, studied, modified and redistributed freely. The fonts, including any derivative works, can be bundled, embedded, and redistributed provided the terms of this licence are met.
The fonts and derivatives, however, cannot be released under any other licence. The requirement for fonts to remain under this licence does not require any document created using the fonts or their derivatives to be published under this licence, as long as the primary purpose of the document is not to be a vehicle for the distribution of the fonts. DEFINITIONS "Font Software" refers to the set of files released by the Copyright Holder(s) under this licence and clearly marked as such. This may include source files, build scripts and documentation. "Original Version" refers to the collection of Font Software components as received under this licence. "Modified Version" refers to any derivative made by adding to, deleting, or substituting -- in part or in whole -- any of the components of the Original Version, by changing formats or by porting the Font Software to a new environment. "Copyright Holder(s)" refers to all individuals and companies who have a copyright ownership of the Font Software. "Substantially Changed" refers to Modified Versions which can be easily identified as dissimilar to the Font Software by users of the Font Software comparing the Original Version with the Modified Version. To "Propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification and with or without charging a redistribution fee), making available to the public, and in some countries other activities as well. PERMISSION & CONDITIONS This licence does not grant any rights under trademark law and all such rights are reserved. Permission is hereby granted, free of charge, to any person obtaining a copy of the Font Software, to propagate the Font Software, subject to the below conditions: 1) Each copy of the Font Software must contain the above copyright notice and this licence. These can be included either as stand-alone text files, human-readable headers or in the appropriate machine- readable metadata fields within text or binary files as long as those fields can be easily viewed by the user. 2) The font name complies with the following: (a) The Original Version must retain its name, unmodified. (b) Modified Versions which are Substantially Changed must be renamed to avoid use of the name of the Original Version or similar names entirely. (c) Modified Versions which are not Substantially Changed must be renamed to both (i) retain the name of the Original Version and (ii) add additional naming elements to distinguish the Modified Version from the Original Version. The name of such Modified Versions must be the name of the Original Version, with "derivative X" where X represents the name of the new work, appended to that name. 3) The name(s) of the Copyright Holder(s) and any contributor to the Font Software shall not be used to promote, endorse or advertise any Modified Version, except (i) as required by this licence, (ii) to acknowledge the contribution(s) of the Copyright Holder(s) or (iii) with their explicit written permission. 4) The Font Software, modified or unmodified, in part or in whole, must be distributed entirely under this licence, and must not be distributed under any other licence. 
The requirement for fonts to remain under this licence does not affect any document created using the Font Software, except any version of the Font Software extracted from a document created using the Font Software may only be distributed under this licence. TERMINATION This licence becomes null and void if any of the above conditions are not met. DISCLAIMER THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE. ``` ### Apache-2.0 and Unicode-DFS-2016[​](#apache-20-and-unicode-dfs-2016 "Direct link to Apache-2.0 and Unicode-DFS-2016") 🔍 3rd party code licensed as Apache 2.0 and Unicode DFS 2016 * [unicode-ident](//github.com/dtolnay/unicode-ident) 📋 Original Unicode DFS 2016 License ``` UNICODE LICENSE V3 COPYRIGHT AND PERMISSION NOTICE Copyright © 1991-2024 Unicode, Inc. NOTICE TO USER: Carefully read the following legal agreement. BY DOWNLOADING, INSTALLING, COPYING OR OTHERWISE USING DATA FILES, AND/OR SOFTWARE, YOU UNEQUIVOCALLY ACCEPT, AND AGREE TO BE BOUND BY, ALL OF THE TERMS AND CONDITIONS OF THIS AGREEMENT. IF YOU DO NOT AGREE, DO NOT DOWNLOAD, INSTALL, COPY, DISTRIBUTE OR USE THE DATA FILES OR SOFTWARE. Permission is hereby granted, free of charge, to any person obtaining a copy of data files and any associated documentation (the "Data Files") or software and any associated documentation (the "Software") to deal in the Data Files or Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, and/or sell copies of the Data Files or Software, and to permit persons to whom the Data Files or Software are furnished to do so, provided that either (a) this copyright and permission notice appear with all copies of the Data Files or Software, or (b) this copyright and permission notice appear in associated Documentation. THE DATA FILES AND SOFTWARE ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS NOTICE BE LIABLE FOR ANY CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THE DATA FILES OR SOFTWARE. Except as contained in this notice, the name of a copyright holder shall not be used in advertising or otherwise to promote the sale, use or other dealings in these Data Files or Software without prior written authorization of the copyright holder. 
``` ### MIT License[​](#mit-license "Direct link to MIT License") 🔍 3rd party code licensed with MIT * [aho-corasick](//github.com/BurntSushi/aho-corasick) * [aligned-vec](//github.com/sarah-ek/aligned-vec) * [ansi-to-html](//github.com/Aloso/to-html) * [arg\_enum\_proc\_macro](//github.com/lu-zero/arg_enum_proc_macro) * [ashpd](//github.com/bilelmoussaoui/ashpd) * [atty](//github.com/softprops/atty) * [bincode](//github.com/servo/bincode) * [binrw](//github.com/jam1garner/binrw) * [binrw\_derive](//github.com/jam1garner/binrw) * [byteorder](//github.com/BurntSushi/byteorder) * [bytes](//github.com/tokio-rs/bytes) * [calloop-wayland-source](//github.com/smithay/calloop-wayland-source) * [calloop](//github.com/Smithay/calloop) * [cfb](//github.com/mdsteele/rust-cfb) * [color\_quant](//github.com/image-rs/color_quant.git) * [comfy-table](//github.com/nukesor/comfy-table) * [console](//github.com/console-rs/console) * [conv](//github.com/DanielKeep/rust-conv) * [convert\_case](//github.com/rutrum/convert-case) * [crossterm](//github.com/crossterm-rs/crossterm) * [crunchy](//github.com/eira-fransham/crunchy) * [csv-core](//github.com/BurntSushi/rust-csv) * [csv](//github.com/BurntSushi/rust-csv) * [darling](//github.com/TedDriggs/darling) * [darling\_core](//github.com/TedDriggs/darling) * [darling\_macro](//github.com/TedDriggs/darling) * [derive-getters](//git.sr.ht/~kvsari/derive-getters) * [derive\_more](//github.com/JelteF/derive_more) * [dlib](//github.com/elinorbgr/dlib) * [dyn-stack](//github.com/kitegi/dynstack/) * [envy](//github.com/softprops/envy) * [faer-entity](//github.com/sarah-ek/faer-rs/) * [faer-macros](//github.com/sarah-quinones/faer-rs/) * [faer-traits](//github.com/sarah-quinones/faer-rs/) * [faer](//github.com/sarah-ek/faer-rs/) * [ffmpeg-sidecar](//github.com/nathanbabcock/ffmpeg-sidecar) * [float-cmp](//github.com/mikedilger/float-cmp) * [foreign\_vec](//github.com/DataEngineeringLabs/foreign_vec) * [generic-array](//github.com/fizyk20/generic-array.git) * [h2](//github.com/hyperium/h2) * [http-body](//github.com/hyperium/http-body) * [hyper](//github.com/hyperium/hyper) * [image-webp](//github.com/image-rs/image-webp) * [image](//github.com/image-rs/image) * [imagesize](//github.com/Roughsketch/imagesize) * [imgref](//github.com/kornelski/imgref) * [indicatif](//github.com/console-rs/indicatif) * [infer](//github.com/bojand/infer) * [inflections](//docs.rs/inflections) * [is-terminal](//github.com/sunfishcode/is-terminal) * [isahc](//github.com/sagebind/isahc) * [loop9](//gitlab.com/kornelski/loop9.git) * [lru](//github.com/jeromefroe/lru-rs) * [lz4-sys](//github.com/10xGenomics/lz4-rs) * [lz4](//github.com/10xGenomics/lz4-rs) * [lz4\_flex](//github.com/pseitz/lz4_flex) * [macro\_rules\_attribute](//github.com/danielhenrymantilla/macro_rules_attribute-rs) * [macro\_rules\_attribute\_proc\_macro](//github.com/danielhenrymantilla/macro_rules_attribute-rs) * [matrixcompare-core](//github.com/Andlon/matrixcompare) * [maybe\_rayon](//github.com/shssoichiro/maybe-rayon) * [mcap](//github.com/foxglove/mcap) * [memchr](//github.com/BurntSushi/memchr) * [memoffset](//github.com/Gilnaa/memoffset) * [miette-derive](//github.com/zkat/miette) * [miette](//github.com/zkat/miette) * [mime\_guess](//github.com/abonander/mime_guess) * [mime\_guess2](//github.com/ttys3/mime_guess2) * [mio](//github.com/tokio-rs/mio) * [multi\_log](//github.com/davechallis/multi_log) * [natord](//github.com/lifthrasiir/rust-natord) * 
[new\_debug\_unreachable](//github.com/mbrubeck/rust-debug-unreachable) * [nix](//github.com/nix-rust/nix) * [nom](//github.com/Geal/nom) * [noop\_proc\_macro](//github.com/lu-zero/noop_proc_macro) * [number\_prefix](//github.com/ogham/rust-number-prefix) * [opencv](//github.com/twistedfall/opencv-rust) * [openssl-sys](//github.com/sfackler/rust-openssl) * [ordered-float](//github.com/reem/rust-ordered-float) * [owo-colors](//github.com/jam1garner/owo-colors) * [pcd-rs-derive](//github.com/jerry73204/pcd-rs) * [pcd-rs](//github.com/jerry73204/pcd-rs) * [peg-macros](//github.com/kevinmehall/rust-peg) * [peg-runtime](//github.com/kevinmehall/rust-peg) * [peg](//github.com/kevinmehall/rust-peg) * [phf](//github.com/rust-phf/rust-phf) * [phf\_generator](//github.com/rust-phf/rust-phf) * [phf\_macros](//github.com/rust-phf/rust-phf) * [phf\_shared](//github.com/rust-phf/rust-phf) * [pico-args](//github.com/RazrFalcon/pico-args) * [platform-dirs](//github.com/cjbassi/platform-dirs-rs) * [ply-rs](//github.com/Fluci/ply-rs.git) * [pulldown-cmark](//github.com/raphlinus/pulldown-cmark) * [quick-xml](//github.com/tafia/quick-xml) * [random\_word](//github.com/MitchellRhysHall/random_word) * [rctree](//github.com/RazrFalcon/rctree) * [regex-automata](//github.com/BurntSushi/regex-automata) * [resvg](//github.com/RazrFalcon/resvg) * [rfd](//github.com/PolyMeilex/rfd) * [rgb](//github.com/kornelski/rust-rgb) * [rmp-serde](//github.com/3Hren/msgpack-rust) * [rmp](//github.com/3Hren/msgpack-rust) * [same-file](//github.com/BurntSushi/same-file) * [schemars](//github.com/GREsau/schemars) * [schemars\_derive](//github.com/GREsau/schemars) * [serde-wasm-bindgen](//github.com/RReverser/serde-wasm-bindgen) * [simd-adler32](//github.com/mcountryman/simd-adler32) * [simd\_helpers](//github.com/lu-zero/simd_helpers) * [slab](//github.com/tokio-rs/slab) * [sluice](//github.com/sagebind/sluice) * [smawk](//github.com/mgeisler/smawk) * [smithay-client-toolkit](//github.com/smithay/client-toolkit) * [smithay-clipboard](//github.com/smithay/smithay-clipboard) * [space](//github.com/rust-cv/space) * [spinners](//github.com/fgribreau/spinners) * [strict-num](//github.com/RazrFalcon/strict-num) * [strsim](//github.com/dguo/strsim-rs) * [strum](//github.com/Peternator7/strum) * [strum\_macros](//github.com/Peternator7/strum) * [supports-color](//github.com/zkat/supports-color) * [supports-hyperlinks](//github.com/zkat/supports-hyperlinks) * [supports-unicode](//github.com/zkat/supports-unicode) * [sys-info](//github.com/FillZpp/sys-info-rs) * [sysinfo](//github.com/GuillaumeGomez/sysinfo) * [termcolor](//github.com/BurntSushi/termcolor) * [textplots](//github.com/loony-bean/textplots-rs) * [textwrap](//github.com/mgeisler/textwrap) * [tiff](//github.com/image-rs/image-tiff) * [tinystl](//github.com/lsh/tinystl) * [tobj](//github.com/Twinklebear/tobj) * [tokio-macros](//github.com/tokio-rs/tokio) * [tokio-stream](//github.com/tokio-rs/tokio) * [tokio-util](//github.com/tokio-rs/tokio) * [tokio](//github.com/tokio-rs/tokio) * [tower-service](//github.com/tower-rs/tower) * [tracing-attributes](//github.com/tokio-rs/tracing) * [tracing-core](//github.com/tokio-rs/tracing) * [tracing-futures](//github.com/tokio-rs/tracing) * [tracing](//github.com/tokio-rs/tracing) * [try-lock](//github.com/seanmonstar/try-lock) * [twox-hash](//github.com/shepmaster/twox-hash) * [unicode-linebreak](//github.com/axelf4/unicode-linebreak) * [unsafe-libyaml](//github.com/dtolnay/unsafe-libyaml) * 
[urlencoding](//github.com/kornelski/rust_urlencoding) * [usvg-parser](//github.com/RazrFalcon/resvg) * [usvg-tree](//github.com/RazrFalcon/resvg) * [usvg](//github.com/RazrFalcon/resvg) * [walkdir](//github.com/BurntSushi/walkdir) * [walkers](//github.com/podusowski/walkers) * [want](//github.com/seanmonstar/want) * [wayland-backend](//github.com/smithay/wayland-rs) * [wayland-client](//github.com/smithay/wayland-rs) * [wayland-csd-frame](//github.com/rust-windowing/wayland-csd-frame) * [wayland-cursor](//github.com/smithay/wayland-rs) * [wayland-protocols-plasma](//github.com/smithay/wayland-rs) * [wayland-protocols-wlr](//github.com/smithay/wayland-rs) * [wayland-protocols](//github.com/smithay/wayland-rs) * [wayland-scanner](//github.com/smithay/wayland-rs) * [wayland-sys](//github.com/smithay/wayland-rs) * [wildmatch](//github.com/becheran/wildmatch) * [winnow](//github.com/winnow-rs/winnow) * [x11-dl](//github.com/AltF02/x11-rs.git) * [xcursor](//github.com/esposm03/xcursor-rs) * [xdg-home](//github.com/zeenix/xdg-home) * [xkbcommon-dl](//github.com/rust-windowing/xkbcommon-dl) * [xml-rs](//github.com/kornelski/xml-rs) * [xmlwriter](//github.com/RazrFalcon/xmlwriter) * [yaserde](//github.com/media-io/yaserde) * [yaserde\_derive](//github.com/media-io/yaserde) * [zbus](//github.com/dbus2/zbus/) * [zbus\_macros](//github.com/dbus2/zbus/) * [zbus\_names](//github.com/dbus2/zbus/) * [zstd](//github.com/gyscos/zstd-rs) * [zvariant](//github.com/dbus2/zbus/) * [zvariant\_derive](//github.com/dbus2/zbus/) * [zvariant\_utils](//github.com/dbus2/zbus/) 📋 Original MIT License ``` Copyright (c) Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` ### ISC License[​](#isc-license "Direct link to ISC License") 🔍 3rd party code licensed with ISC * [inotify-sys](//github.com/hannobraun/inotify-sys) * [inotify](//github.com/hannobraun/inotify) * [is\_ci](//github.com/zkat/is_ci) * [libloading](//github.com/nagisa/rust_libloading/) * [rustls-webpki](//github.com/rustls/webpki) * [untrusted](//github.com/briansmith/untrusted) 📋 Original ISC License ``` Copyright . Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHORS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ``` ### BSD-2-Clause License[​](#bsd-2-clause-license "Direct link to BSD-2-Clause License") 🔍 3rd party code licensed with BSD-2-Clause * [arrayref](//github.com/droundy/arrayref) * [av1-grain](//github.com/rust-av/av1-grain) * [http-cache-semantics](//github.com/kornelski/rusty-http-cache-semantics) * [rav1e](//github.com/xiph/rav1e/) * [re\_rav1d](//github.com/memorysafety/rav1d) * [v\_frame](//github.com/rust-av/v_frame) 📋 Original BSD-2-Clause License ``` BSD 2-Clause License Copyright (c) , All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ### BSD-3-Clause License[​](#bsd-3-clause-license "Direct link to BSD-3-Clause License") 🔍 3rd party code licensed with BSD-3-Clause * [AMD](//github.com/DrTimothyAldenDavis/SuiteSparse/) * [COLAMD](//github.com/DrTimothyAldenDavis/SuiteSparse/) * [alloc-no-stdlib](//github.com/dropbox/rust-alloc-no-stdlib) * [alloc-stdlib](//github.com/dropbox/rust-alloc-no-stdlib) * [avif-serialize](//github.com/kornelski/avif-serialize) * [brotli-decompressor](//github.com/dropbox/rust-brotli-decompressor) * [brotli](//github.com/dropbox/rust-brotli) * [cAMD](//github.com/DrTimothyAldenDavis/SuiteSparse/) * [cCOLAMD](//github.com/DrTimothyAldenDavis/SuiteSparse/) * [encoding\_rs](//github.com/hsivonen/encoding_rs) * [exr](//github.com/johannesvollmer/exrs) * [instant](//github.com/sebcrozet/instant) * [lebe](//github.com/johannesvollmer/lebe) * [nalgebra](//github.com/dimforge/nalgebra) * [never](//fuchsia.googlesource.com/fuchsia/+/master/garnet/lib/rust/never) * [ravif](//github.com/kornelski/cavif-rs) * [subtle](//github.com/dalek-cryptography/subtle) * [tiny-skia](//github.com/RazrFalcon/tiny-skia) * [tiny-skia-path](//github.com/RazrFalcon/tiny-skia/tree/master/path) 📋 Original BSD-3-Clause License ``` BSD 3-Clause License Copyright (c) , All rights reserved. 
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ### Zlib License[​](#zlib-license "Direct link to Zlib License") 🔍 3rd party code licensed with Zlib * [slotmap](//github.com/orlp/slotmap) 📋 Original Zlib License ``` The zlib/libpng License Copyright (c) This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution. ``` ### MPL-2.0 License[​](#mpl-20-license "Direct link to MPL-2.0 License") 🔍 3rd party code licensed with MPL-2.0 * [colored](//github.com/mackwic/colored) * [indent](//git.sr.ht/~ilkecan/indent-rs) * [option-ext](//github.com/soc/option-ext.git) * [webpki-roots](//github.com/rustls/webpki-roots) 📋 Original MPL-2.0 License See [Mozilla's official page](//www.mozilla.org/en-US/MPL/2.0/) for information on this license. 
### Boost Software License[​](#boost-software-license "Direct link to Boost Software License") 🔍 3rd party code licensed with BSL-1.0 * [xxhash-rust](//github.com/DoumanAsh/xxhash-rust) 📋 Original Boost Software License Boost Software License - Version 1.0 - August 17th, 2003 Permission is hereby granted, free of charge, to any person or organization obtaining a copy of the software and accompanying documentation covered by this license (the "Software") to use, reproduce, display, distribute, execute, and transmit the Software, and to prepare derivative works of the Software, and to permit third-parties to whom the Software is furnished to do so, all subject to the following: The copyright notices in the Software and this entire statement, including the above license grant, this restriction and the following disclaimer, must be included in all copies of the Software, in whole or in part, and all derivative works of the Software, unless such copies or derivative works are solely in the form of machine-executable object code generated by a source language processor. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ### CC-1.0 License[​](#cc-10-license "Direct link to CC-1.0 License") 🔍 3rd party code licensed with CC0-1.0 * [hexf-parse](//github.com/lifthrasiir/hexf) * [notify](//github.com/notify-rs/notify) * [tiny-keccak](//github.com/null) * [to\_method](//github.com/whentze/to_method) 📋 Original CC0-1.0 License ``` Creative Commons Legal Code Editing CC0 1.0 Universal Official translations of this legal tool are available CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER. Statement of Purpose The laws of most jurisdictions throughout the world automatically confer exclusive Copyright and Related Rights (defined below) upon the creator and subsequent owner(s) (each and all, an "owner") of an original work of authorship and/or a database (each, a "Work"). Certain owners wish to permanently relinquish those rights to a Work for the purpose of contributing to a commons of creative, cultural and scientific works ("Commons") that the public can reliably and without fear of later claims of infringement build upon, modify, incorporate in other works, reuse and redistribute as freely as possible in any form whatsoever and for any purposes, including without limitation commercial purposes. These owners may contribute to the Commons to promote the ideal of a free culture and the further production of creative, cultural and scientific works, or to gain reputation or greater distribution for their Work in part through the use and efforts of others. 
For these and/or other purposes and motivations, and without any expectation of additional consideration or compensation, the person associating CC0 with a Work (the "Affirmer"), to the extent that he or she is an owner of Copyright and Related Rights in the Work, voluntarily elects to apply CC0 to the Work and publicly distribute the Work under its terms, with knowledge of his or her Copyright and Related Rights in the Work and the meaning and intended legal effect of CC0 on those rights. 1. Copyright and Related Rights. A Work made available under CC0 may be protected by copyright and related or neighboring rights ("Copyright and Related Rights"). Copyright and Related Rights include, but are not limited to, the following: i.the right to reproduce, adapt, distribute, perform, display, communicate, and translate a Work; ii. moral rights retained by the original author(s) and/or performer(s); iii.publicity and privacy rights pertaining to a person's image or likeness depicted in a Work; iv.rights protecting against unfair competition in regards to a Work, subject to the limitations in paragraph 4(a), below; v.rights protecting the extraction, dissemination, use and reuse of data in a Work; vi.database rights (such as those arising under Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, and under any national implementation thereof, including any amended or successor version of such directive); and vii.other similar, equivalent or corresponding rights throughout the world based on applicable law or treaty, and any national implementations thereof. 2. Waiver. To the greatest extent permitted by, but not in contravention of, applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and unconditionally waives, abandons, and surrenders all of Affirmer's Copyright and Related Rights and associated claims and causes of action, whether now known or unknown (including existing as well as future claims and causes of action), in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each member of the public at large and to the detriment of Affirmer's heirs and successors, fully intending that such Waiver shall not be subject to revocation, rescission, cancellation, termination, or any other legal or equitable action to disrupt the quiet enjoyment of the Work by the public as contemplated by Affirmer's express Statement of Purpose. 3. Public License Fallback. Should any part of the Waiver for any reason be judged legally invalid or ineffective under applicable law, then the Waiver shall be preserved to the maximum extent permitted taking into account Affirmer's express Statement of Purpose. 
In addition, to the extent the Waiver is so judged Affirmer hereby grants to each affected person a royalty-free, non transferable, non sublicensable, non exclusive, irrevocable and unconditional license to exercise Affirmer's Copyright and Related Rights in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "License"). The License shall be deemed effective as of the date CC0 was applied by Affirmer to the Work. Should any part of the License for any reason be judged legally invalid or ineffective under applicable law, such partial invalidity or ineffectiveness shall not invalidate the remainder of the License, and in such case Affirmer hereby affirms that he or she will not (i) exercise any of his or her remaining Copyright and Related Rights in the Work or (ii) assert any associated claims and causes of action with respect to the Work, in either case contrary to Affirmer's express Statement of Purpose. 4. Limitations and Disclaimers. a.No trademark or patent rights held by Affirmer are waived, abandoned, surrendered, licensed or otherwise affected by this document. b.Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work, express, implied, statutory or otherwise, including without limitation warranties of title, merchantability, fitness for a particular purpose, non infringement, or the absence of latent or other defects, accuracy, or the present or absence of errors, whether or not discoverable, all to the greatest extent permissible under applicable law. c.Affirmer disclaims responsibility for clearing rights of other persons that may apply to the Work or any use thereof, including without limitation any person's Copyright and Related Rights in the Work. Further, Affirmer disclaims responsibility for obtaining any necessary consents, permissions or other rights required for any use of the Work. d.Affirmer understands and acknowledges that Creative Commons is not a party to this document and has no duty or obligation with respect to this CC0 or use of the Work. ``` ### LGPL-2.1+ License[​](#lgpl-21-license "Direct link to LGPL-2.1+ License") 🔍 3rd Party Software Dynamically Linked under the LGPL-2.1+ The following software is dynamically linked to MetriCal under the LGPL-2.1+. * [CHOLMOD](//github.com/DrTimothyAldenDavis/SuiteSparse/) * [FFmpeg](//ffmpeg.org/) 📋 Original LGPL-2.1+ License Text ``` CHOLMOD/Check Module. Copyright (C) 2005-2023, Timothy A. Davis CHOLMOD is also available under other licenses; contact authors for details. http://suitesparse.com CHOLMOD/Cholesky module, Copyright (C) 2005-2023, Timothy A. Davis. CHOLMOD is also available under other licenses; contact authors for details. http://suitesparse.com This Module is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. This Module is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. 
You should have received a copy of the GNU Lesser General Public License along with this Module; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA ``` ### GPL with Section 7 GCC Exception[​](#gpl-with-section-7-gcc-exception "Direct link to GPL with Section 7 GCC Exception") 🔍 3rd party libraries dynamically linked with the GPL with Section 7 GCC Exception * `libgomp1` * `libstdc++v3` * `libgcc_s` * `libc` * `libm` These libraries were built and linked to the final version of MetriCal incidentally through the use of GCC for some of our dependencies (CHOLMOD, OpenCV). 📋 Original GPL with Section 7 GCC Exception License ``` This is the Debian GNU/Linux prepackaged version of the GNU compiler collection, containing Ada, C, C++, D, Fortran 95, Go, Objective-C, Objective-C++, and Modula-2 compilers, documentation, and support libraries. In addition, Debian provides the gm2 compiler, either in the same source package, or built from a separate same source package. Packaging is done by the Debian GCC Maintainers , with sources obtained from: ftp://gcc.gnu.org/pub/gcc/releases/ (for full releases) svn://gcc.gnu.org/svn/gcc/ (for prereleases) ftp://sourceware.org/pub/newlib/ (for newlib) git://git.savannah.gnu.org/gm2.git (for Modula-2) The current gcc-12 source package is taken from the git gcc-12-branch. Changes: See changelog.Debian.gz Debian splits the GNU Compiler Collection into packages for each language, library, and documentation as follows: Language Compiler package Library package Documentation --------------------------------------------------------------------------- Ada gnat-12 libgnat-12 gnat-12-doc C gcc-12 gcc-12-doc C++ g++-12 libstdc++6 libstdc++6-12-doc D gdc-12 Fortran 95 gfortran-12 libgfortran5 gfortran-12-doc Go gccgo-12 libgo0 Objective C gobjc-12 libobjc4 Objective C++ gobjc++-12 Modula-2 gm2-12 libgm2 For some language run-time libraries, Debian provides source files, development files, debugging symbols and libraries containing position- independent code in separate packages: Language Sources Development Debugging Position-Independent ------------------------------------------------------------------------------ C++ libstdc++6-12-dbg libstdc++6-12-pic D libphobos-12-dev Additional packages include: All languages: libgcc1, libgcc2, libgcc4 GCC intrinsics (platform-dependent) gcc-12-base Base files common to all compilers gcc-12-soft-float Software floating point (ARM only) gcc-12-source The sources with patches Ada: libgnat-util12-dev, libgnat-util12 GNAT version library C: cpp-12, cpp-12-doc GNU C Preprocessor libssp0-dev, libssp0 GCC stack smashing protection library libquadmath0 Math routines for the __float128 type fixincludes Fix non-ANSI header files C, C++ and Fortran 95: libgomp1-dev, libgomp1 GCC OpenMP (GOMP) support library libitm1-dev, libitm1 GNU Transactional Memory Library Biarch support: On some 64-bit platforms which can also run 32-bit code, Debian provides additional packages containing 32-bit versions of some libraries. These packages have names beginning with 'lib32' instead of 'lib', for example lib32stdc++6. Similarly, on some 32-bit platforms which can also run 64-bit code, Debian provides additional packages with names beginning with 'lib64' instead of 'lib'. These packages contain 64-bit versions of the libraries. (At this time, not all platforms and not all libraries support biarch.) The license terms for these lib32 or lib64 packages are identical to the ones for the lib packages. 
COPYRIGHT STATEMENTS AND LICENSING TERMS GCC is Copyright (C) 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019 Free Software Foundation, Inc. GCC is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your option) any later version. GCC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. Files that have exception clauses are licensed under the terms of the GNU General Public License; either version 3, or (at your option) any later version. On Debian GNU/Linux systems, the complete text of the GNU General Public License is in `/usr/share/common-licenses/GPL', version 3 of this license in `/usr/share/common-licenses/GPL-3'. The following runtime libraries are licensed under the terms of the GNU General Public License (v3 or later) with version 3.1 of the GCC Runtime Library Exception (included in this file): - libgcc (libgcc/, gcc/libgcc2.[ch], gcc/unwind*, gcc/gthr*, gcc/coretypes.h, gcc/crtstuff.c, gcc/defaults.h, gcc/dwarf2.h, gcc/emults.c, gcc/gbl-ctors.h, gcc/gcov-io.h, gcc/libgcov.c, gcc/tsystem.h, gcc/typeclass.h). - libatomic - libdecnumber - libgomp - libitm - libssp - libstdc++-v3 - libobjc - libgfortran - The libgnat-12 Ada support library and libgnat-util12 library. - Various config files in gcc/config/ used in runtime libraries. - libvtv The libbacktrace library is licensed under the following terms: Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: (1) Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. (2) Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. (3) The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. The libsanitizer libraries (libasan, liblsan, libtsan, libubsan) are licensed under the following terms: Copyright (c) 2009-2019 by the LLVM contributors. All rights reserved. 
Developed by: LLVM Team University of Illinois at Urbana-Champaign http://llvm.org Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal with the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimers. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimers in the documentation and/or other materials provided with the distribution. * Neither the names of the LLVM Team, University of Illinois at Urbana-Champaign, nor the names of its contributors may be used to endorse or promote products derived from this Software without specific prior written permission. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH THE SOFTWARE. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. The libffi library is licensed under the following terms: libffi - Copyright (c) 1996-2003 Red Hat, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the ``Software''), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. The documentation is licensed under the GNU Free Documentation License (v1.2). On Debian GNU/Linux systems, the complete text of this license is in `/usr/share/common-licenses/GFDL-1.2'. GCC RUNTIME LIBRARY EXCEPTION Version 3.1, 31 March 2009 Copyright (C) 2009 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. This GCC Runtime Library Exception ("Exception") is an additional permission under section 7 of the GNU General Public License, version 3 ("GPLv3"). It applies to a given file (the "Runtime Library") that bears a notice placed by the copyright holder of the file stating that the file is governed by GPLv3 along with this Exception. When you use GCC to compile a program, GCC may combine portions of certain GCC header files and runtime libraries with the compiled program. The purpose of this Exception is to allow compilation of non-GPL (including proprietary) programs to use, in this way, the header files and runtime libraries covered by this Exception. 0. Definitions. A file is an "Independent Module" if it either requires the Runtime Library for execution after a Compilation Process, or makes use of an interface provided by the Runtime Library, but is not otherwise based on the Runtime Library. "GCC" means a version of the GNU Compiler Collection, with or without modifications, governed by version 3 (or a specified later version) of the GNU General Public License (GPL) with the option of using any subsequent versions published by the FSF. "GPL-compatible Software" is software whose conditions of propagation, modification and use would permit combination with GCC in accord with the license of GCC. "Target Code" refers to output from any compiler for a real or virtual target processor architecture, in executable form or suitable for input to an assembler, loader, linker and/or execution phase. Notwithstanding that, Target Code does not include data in any format that is used as a compiler intermediate representation, or used for producing a compiler intermediate representation. The "Compilation Process" transforms code entirely represented in non-intermediate languages designed for human-written code, and/or in Java Virtual Machine byte code, into Target Code. Thus, for example, use of source code generators and preprocessors need not be considered part of the Compilation Process, since the Compilation Process can be understood as starting with the output of the generators or preprocessors. A Compilation Process is "Eligible" if it is done using GCC, alone or with other GPL-compatible software, or if it is done without using any work based on GCC. For example, using non-GPL-compatible Software to optimize any GCC intermediate representations would not qualify as an Eligible Compilation Process. 1. Grant of Additional Permission. You have permission to propagate a work of Target Code formed by combining the Runtime Library with Independent Modules, even if such propagation would otherwise violate the terms of GPLv3, provided that all Target Code was generated by Eligible Compilation Processes. You may then convey such a combination under terms of your choice, consistent with the licensing of the Independent Modules. 2. No Weakening of GCC Copyleft. 
The availability of this Exception does not imply any general presumption that third-party software is unaffected by the copyleft requirements of the license of GCC. libquadmath/*.[hc]: Copyright (C) 2010 Free Software Foundation, Inc. Written by Francois-Xavier Coudert Written by Tobias Burnus This file is part of the libiberty library. Libiberty is free software; you can redistribute it and/or modify it under the terms of the GNU Library General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. Libiberty is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Library General Public License for more details. libquadmath/math: atanq.c, expm1q.c, j0q.c, j1q.c, log1pq.c, logq.c: Copyright 2001 by Stephen L. Moshier This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. coshq.c, erfq.c, jnq.c, lgammaq.c, powq.c, roundq.c: Changes for 128-bit __float128 are Copyright (C) 2001 Stephen L. Moshier and are incorporated herein by permission of the author. The author reserves the right to distribute this material elsewhere under different copying permissions. These modifications are distributed here under the following terms: This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. ldexpq.c: * Conversion to long double by Ulrich Drepper, * Cygnus Support, drepper@cygnus.com. cosq_kernel.c, expq.c, sincos_table.c, sincosq.c, sincosq_kernel.c, sinq_kernel.c, truncq.c: Copyright (C) 1997, 1999 Free Software Foundation, Inc. The GNU C Library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. The GNU C Library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. isinfq.c: * Written by J.T. Conklin . * Change for long double by Jakub Jelinek * Public domain. llroundq.c, lroundq.c, tgammaq.c: Copyright (C) 1997, 1999, 2002, 2004 Free Software Foundation, Inc. This file is part of the GNU C Library. Contributed by Ulrich Drepper , 1997 and Jakub Jelinek , 1999. 
The GNU C Library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. The GNU C Library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. log10q.c: Cephes Math Library Release 2.2: January, 1991 Copyright 1984, 1991 by Stephen L. Moshier Adapted for glibc November, 2001 This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. remaining files: * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved. * * Developed at SunPro, a Sun Microsystems, Inc. business. * Permission to use, copy, modify, and distribute this * software is freely granted, provided that this notice * is preserved. ``` ### Non-standard / or Multi-Licensed Licenses[​](#non-standard--or-multi-licensed-licenses "Direct link to Non-standard / or Multi-Licensed Licenses") Some packages contain software components that are multi-licensed, or are otherwise specially licensed. These are listed here. 📋 Ring crate The [ring](//github.com/briansmith/ring/blob/main/LICENSE) crate is used as a transient dependency in several packages above. The license for this code falls under several different agreements: ``` *ring* uses an "ISC" license, like BoringSSL used to use, for new code files. See LICENSE-other-bits for the text of that license. See LICENSE-BoringSSL for code that was sourced from BoringSSL under the Apache 2.0 license. Some code that was sourced from BoringSSL under the ISC license. In each case, the license info is at the top of the file. See src/polyfill/once_cell/LICENSE-APACHE and src/polyfill/once_cell/LICENSE-MIT for the license to code that was sourced from the once_cell project. ====ISC License==== Copyright 2015-2025 Brian Smith. Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ====LICENSE-other-bits==== Copyright 2015-2025 Brian Smith. Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. 
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ====LICENSE-BoringSSL==== Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." 
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ## Licenses for support code Parts of the TLS test suite are under the Go license. This code is not included in BoringSSL (i.e. libcrypto and libssl) when compiled, however, so distributing code linked against BoringSSL does not trigger this license: Copyright (c) 2009 The Go Authors. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. - Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. BoringSSL uses the Chromium test infrastructure to run a continuous build, trybots etc. The scripts which manage this, and the script for generating build metadata, are under the Chromium license. Distributing code linked against BoringSSL does not trigger this license. Copyright 2015 The Chromium Authors. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. - Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ====src/polyfill/once_cell/LICENSE-APACHE==== Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." 
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

====/src/polyfill/once_cell/LICENSE-MIT====

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHOR OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```

---

# Multiple Targets

## Target Configuration[​](#target-configuration "Direct link to Target Configuration")

MetriCal works with multiple calibration targets simultaneously. You don't need a specially measured room or precise target positioning; just a few targets visible to each sensor are sufficient.

For most users, our [premade targets](https://gitlab.com/tangram-vision/platform/metrical_premade_targets) combined with the target selection wizard will handle all technical details automatically. This approach provides you with both suitable targets and a correctly formatted object space file. Custom object space configuration is typically only necessary for specialized setups or non-standard calibration requirements. The sections below explain how to configure custom object spaces if your application requires it.

## Constructing the Object Space[​](#constructing-the-object-space "Direct link to Constructing the Object Space")

### Generating UUIDs[​](#generating-uuids "Direct link to Generating UUIDs")

Each target should be assigned a different UUID (v4); this is how MetriCal keeps track of them, similar to how it tracks components.

If you're writing your own object space file, you can generate a valid UUID using a site like [UUID Generator](https://www.uuidgenerator.net/) or, if you're on Linux, using the `uuid` command:

```
$ sudo apt install uuid
$ uuid -v4
b5e4183c-d1ae-11ee-91e7-afd8bef1d15c   # <--- a valid UUID
```
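If you have Python 3 handy, the standard library's `uuid` module will also do the job; this is plain Python, not a MetriCal-specific tool:

```
# Generate a random version-4 UUID with Python's standard library.
import uuid

print(uuid.uuid4())  # e.g. 24e6df7b-b756-4b9c-a719-660d45d796bf
```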
If you're writing your object space file by hand, you can instead generate a valid UUID using a site like [UUID Generator](https://www.uuidgenerator.net/) or, if you're on Linux, using the `uuid` command:

```
$ sudo apt install uuid
$ uuid -v4
b5e4183c-d1ae-4f3a-91e7-afd8bef1d15c   # <--- a valid v4 UUID
```

### Format[​](#format "Direct link to Format")

Below, we have an example of a JSON object that describes two markerboards, elegantly named `24e6df7b-b756-4b9c-a719-660d45d796bf` and `7324938d-de4e-4d36-a25b-fbd8e6102026`:

```
{
  "object_spaces": {
    "24e6df7b-b756-4b9c-a719-660d45d796bf": {
      "descriptor": {
        "variances": [0.00004, 0.00004, 0.0004]
      },
      "detector": {
        "markerboard": {
          "checker_length": 0.1524,
          "corner_height": 5,
          "corner_width": 13,
          "marker_dictionary": "Aruco4x4_1000",
          "marker_id_offset": 0,
          "marker_length": 0.1
        }
      }
    },
    "7324938d-de4e-4d36-a25b-fbd8e6102026": {
      "descriptor": {
        "variances": [0.00004, 0.00004, 0.0004]
      },
      "detector": {
        "markerboard": {
          "checker_length": 0.1524,
          "corner_height": 5,
          "corner_width": 13,
          "marker_dictionary": "Aruco4x4_1000",
          "marker_id_offset": 50,
          "marker_length": 0.1
        }
      }
    }
  }
}
```

### Differentiating Targets[​](#differentiating-targets "Direct link to Differentiating Targets")

#### Different Dictionaries[​](#different-dictionaries "Direct link to Different Dictionaries")

Careful observers will note that each target in the example above has a different `marker_id_offset`. This indicates the lowest tag ID on the target, with the rest of the tags increasing sequentially. There should be no overlap in tag IDs between targets; all tags should be unique. If this assumption is violated, *horrible things will happen*... mainly, your calibration results may make absolutely no sense.

This goes for different *dictionaries* as well. Counterintuitively, users have reported misdetections when using ArUco markers from different dictionaries but with identical IDs. Bottom line: make sure that **all of your targets have unique IDs**, no matter what dictionary they use.

#### Circle Target Radii[​](#circle-target-radii "Direct link to Circle Target Radii")

When using multiple [Circle targets](/metrical/targets/target_overview.md), MetriCal requires that the radii of the circles be at least **10cm different** from each other. If the radii are too similar, the calibration may fail or produce incorrect results.

### Object Relative Extrinsics (OREs)[​](#object-relative-extrinsics-ores "Direct link to Object Relative Extrinsics (OREs)")

Since MetriCal runs a full optimization over all components and object spaces, it naturally derives extrinsics between everything as well. You'll find that the `spatial_constraints` field in the JSON will be populated with the extrinsics between all targets post-calibration.

Just like any other data source, more object spaces mean more OREs; more OREs will add time to the optimization. It's just more to solve for! If you're not interested in surveying the extrinsics between object spaces, and are just worried about the component-side calibration, we recommend setting `--disable-ore-inference`:

```
metrical calibrate --disable-ore-inference ...
```

Note that this flag shouldn't dramatically change the component-side calibration either way. The only meaningful difference is that, with ORE inference left on, your results will also report spatial constraints between all objects and components.

## Mutual Construction Groups[​](#mutual-construction-groups "Direct link to Mutual Construction Groups")

Some targets are designed to be used together.
For example, our LiDAR circle target is a single piece of hardware, but to MetriCal, it's actually two targets: a markerboard and a circle. We indicate this by assigning them to the same *mutual construction group*: ``` { // The markerboard and circle targets "object_spaces": { "24e6df7b-b756-4b9c-a719-660d45d796bf": { ... "detector": { "markerboard": { ... } } }, "34e6df7b-b756-4b9c-a719-660d45d796bf": { ... "detector": { "circle": { ... } } } }, // Our mutual construction group, indicating these are one and the same "mutual_construction_groups": [ ["24e6df7b-b756-4b9c-a719-660d45d796bf", "34e6df7b-b756-4b9c-a719-660d45d796bf"] ] } ``` Currently, our LiDAR board is the only target that uses this feature, but it's a good example of how MetriCal is designed to be extensible and future-proof. --- # Target Construction Guide This guide consolidates information on constructing calibration targets for use with MetriCal, focusing specifically on target construction regardless of sensor modality or purpose. Tangram Vision Target Repository Access Tangram Vision's [target repository](https://gitlab.com/tangram-vision/platform/metrical_premade_targets) for easy-to-construct target templates. ## Target Selection and Generation[​](#target-selection-and-generation "Direct link to Target Selection and Generation") ### Using Pre-made Targets (Recommended)[​](#using-pre-made-targets-recommended "Direct link to Using Pre-made Targets (Recommended)") When using MetriCal, we generally recommend choosing from the premade targets in our [target repository](https://gitlab.com/tangram-vision/platform/metrical_premade_targets). Though it is possible to design your own targets with other tools, or repurpose targets that you may have used with other calibration software, using prebuilt targets helps prevent many common issues. All targets in the premade target repository come with the object space file that was used to generate them, so they're guaranteed to be accurate. Additionally, the target repository includes a target selection wizard that helps you automatically select targets that are valid in combination with one another. #### Target Selection Wizard[​](#target-selection-wizard "Direct link to Target Selection Wizard") To use the target selection wizard: 1. You'll need a system with Python 3 installed 2. Download the [target repository](https://gitlab.com/tangram-vision/platform/metrical_premade_targets) 3. Run `python3 target_selection_wizard.py ` 4. Follow the onscreen instructions 5. When complete, the wizard will output PDFs of all chosen targets to the chosen output directory, as well as an object space json file that can be used directly with MetriCal ### Target Type Considerations[​](#target-type-considerations "Direct link to Target Type Considerations") When selecting target types, consider these factors: 1. **Target Type**: * **AprilGrid-style targets** have higher feature density than Markerboards, which improves calibration quality. However, our AprilGrid detector is less robust than the Markerboard one, and may be less consistent in certain scenarios. We generally recommend trying AprilGrids first and then switching to Markerboard if your detections look sporadic. * **Markerboard (ChArUco)** targets might be preferred if you need the larger tag ID dictionary or to fit into an existing calibration pipeline 2. 
**Target Size**: * Choose the largest target size that you can practically use * Bigger targets are generally more advantageous for calibration quality * They can be seen from farther away and by more cameras at once * Once the target is big enough to exceed the FOV of your camera at the desired capture distance, there is much less advantage to increasing size further 3. **Marker Density**: * Higher marker density creates more features for calibration but risks detection failures at higher distances or lower resolutions * "Standard" density targets are a good default option * "Sparse" density may be appropriate if using targets very far from your camera or with low-resolution cameras * "Dense" density is suitable for close-range, high-resolution captures * It's better to err on the side of more sparse - getting any detections is more important than maximizing the number of detections * Aim for markers to always appear larger than 20px in your captured images 4. **Lidar Compatibility**: * Lidar targets require a retroreflective ring that makes them detectable * If using multiple lidar targets, the radius of each retroreflective circle must differ by at least 10cm * You can mix lidar and non-lidar targets in your setup 5. **Marker ID Offsets**: * Each target needs completely unique marker IDs * The selection wizard handles this automatically * If designing your own targets, you must ensure no ID overlap ## Printing and Assembly[​](#printing-and-assembly "Direct link to Printing and Assembly") ### Printing Specifications[​](#printing-specifications "Direct link to Printing Specifications") 1. Find a print shop that can print targets of your chosen sizes * For US-based customers, [uprinting](https://www.uprinting.com/) has been proven reliable * Recommended printing substrates: * Foam board (lighter, better for carrying during object-motion captures) * Aluminum (more durable and easier to mount) 2. **CRITICAL**: Request that the print shop *center the PDFs rather than scaling them at all* * Maintaining the exact scale of the final printed markers is extremely important for calibration accuracy ### Assembling Lidar Targets[​](#assembling-lidar-targets "Direct link to Assembling Lidar Targets") If you printed lidar targets, you'll need to apply retroreflectors to the printed yellow circle as a separate step: 1. Use any retroreflective tape cut into small segments 2. Apply carefully such that the yellow circle is covered as evenly as possible 3. For premade lidar targets, Tangram offers precut, exactly-sized retroreflective stickers that are easier to apply consistently * Contact for more details ## Verification and Final Steps[​](#verification-and-final-steps "Direct link to Verification and Final Steps") ### Verify Measurements[​](#verify-measurements "Direct link to Verify Measurements") After printing, measure your target to verify that it is correctly scaled: 1. All premade targets have size info printed in the top left corner 2. For AprilGrid targets: * Measure one of the markers * Ensure it matches the `marker_length` field 3. For Markerboards: * Measure one of the black squares * Ensure it matches the `checker_length` ### Target Placement Considerations[​](#target-placement-considerations "Direct link to Target Placement Considerations") When using multiple targets: 1. Ensure their relative transforms remain constant for the duration of the calibration sequence 2. Consider how to practically mount each target in a multi-target scenario 3. 
Place targets such that there are times when multiple targets can be observed by the same sensor

## Troubleshooting Target Issues[​](#troubleshooting-target-issues "Direct link to Troubleshooting Target Issues")

If you encounter the "No Features Detected" error ([cal-calibrate-001](/metrical/commands/command_errors.md)), check the following:

* The measurements of your targets should be in meters
* The dictionary used for your boards, if applicable (e.g., a 4×4 dictionary has tags whose inner pattern is a 4×4 grid of squares)
* For Markerboards, verify your `initial_corner` is set to the correct variant ('Marker' or 'Square')
* Make sure you can see as much of the board as possible when collecting your data
* Try adjusting your Camera or LiDAR filter settings to ensure detections are not filtered out

## Custom Target Solutions[​](#custom-target-solutions "Direct link to Custom Target Solutions")

If you have unique constraints and feel that the premade targets are insufficient, contact . Tangram Vision has designed and implemented custom target detectors for clients in the past and can help with special requests.

---

# Supported Targets

MetriCal supports a large variety of traditional calibration targets, as well as a few of our own design. You can even use multiple targets at once, without having to do much of anything! Learn how here: [Using Multiple Targets](/metrical/targets/multiple_targets.md).

Targets

Find examples for AprilGrid, Markerboard, and Lidar targets in the [MetriCal Premade Targets repository](https://gitlab.com/tangram-vision/platform/metrical_premade_targets) on GitLab.

* AprilGrid
* Markerboard
* Circle Target
* SquareMarkers
* DotMarkers
* Checkerboard

## AprilGrid[​](#aprilgrid "Direct link to AprilGrid")

AprilGrids are patterned sets of Apriltags. They have contrasting squares in the corner of every tag; this provides feature detection algorithms more information to derive corner locations.

![Target: AprilGrid](/assets/images/fiducial_aprilgrid_desc-e6f231dc023ebbca7e9cf12bc6dcb959.png)

| Field | Type | Description |
| --- | --- | --- |
| `marker_dictionary` | string | The marker dictionary used on this target. See Supported Marker Dictionaries below. |
| `marker_grid_width` | float | Number of AprilTags / markers horizontally on the board |
| `marker_grid_height` | float | Number of AprilTags / markers vertically on the board |
| `marker_length` | float | The length of one edge of the AprilTags in the board, in meters |
| `tag_spacing` | float | The space between the tags, as a fraction of the marker edge size \[0.0, 1.0] |
| `marker_id_offset` | integer | (Optional) Lowest marker ID present in the board. This will offset all expected marker values. Default is `0`. |

***

### Supported Marker Dictionaries[​](#supported-marker-dictionaries "Direct link to Supported Marker Dictionaries")

| Value | Description |
| --- | --- |
| "Apriltag16h5" | 4x4 bit Apriltag containing 20 markers. Minimum hamming distance between any two codes is 5. |
| "Apriltag25h9" | 5x5 bit Apriltag containing 35 markers. Minimum hamming distance between any two codes is 9. |
| "Apriltag36h11" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. |
| "ApriltagKalibr" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. Every marker has a border width of 2; this is the only main differentiator between Kalibr and other Apriltag types. |
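For reference, the sketch below shows what an AprilGrid entry in an object space file could look like, built with a short Python script so the UUID is minted in the same place. The values are illustrative only, and the `"aprilgrid"` detector key and nesting simply mirror the `"markerboard"` example from the Multiple Targets page; check the object space file shipped with a premade target for the exact spelling MetriCal expects.

```
import json
import uuid

# Illustrative AprilGrid entry: field names follow the table above; the
# descriptor/detector nesting mirrors the markerboard example shown earlier.
# The dimensions describe a hypothetical 6x6 grid of 8.8 cm tags.
aprilgrid = {
    "descriptor": {"variances": [0.00004, 0.00004, 0.0004]},
    "detector": {
        "aprilgrid": {
            "marker_dictionary": "Apriltag36h11",
            "marker_grid_width": 6,
            "marker_grid_height": 6,
            "marker_length": 0.088,
            "tag_spacing": 0.3,
            "marker_id_offset": 0,
        }
    },
}

object_space = {"object_spaces": {str(uuid.uuid4()): aprilgrid}}

with open("object_space.json", "w") as f:
    json.dump(object_space, f, indent=2)
```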
| | "Apriltag36h11" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. | | "ApriltagKalibr" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. Every marker has a border width of 2; this is the only main differentiator between Kalibr and other Apriltag types. | ## Markerboard[​](#markerboard "Direct link to Markerboard") Markerboards are similar to checkerboards, but contain a series of coded markers in the empty spaces of the checkerboard. These codes are most often in the form of April or ArUco tags, which allow for better identification and isolation of features. ![Target: Markerboard](/assets/images/fiducial_markerboard_desc-9d391ff057102f5f07c59cda4168ffdb.png) | Field | Type | Description | | ------------------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `checker_length` | float | The length of the side of a checker (a solid square), in meters | | `corner_height` | float | The number of inner corners on the board, counted vertically. This is one less than the number of columns on the board | | `corner_width` | float | The number of inner corners on the board, counted horizontally. This is one less than the number of rows on the board | | `marker_dictionary` | string | The marker dictionary used on this target. See Supported Marker Dictionaries below. | | `marker_length` | float | The length of the side of a marker, in meters | | `marker_id_offset` | integer | (Optional) Lowest marker ID present in the board. This will offset all expected marker values. Default is `0`. | | `initial_corner` | string | (Optional) Valid values are "square" and "marker". Default is "marker". Used to counteract a [breaking change](https://github.com/opencv/opencv/issues/23152) to markerboards generated by newer versions of OpenCV. This option specifies if the origin corner of your board is populated with a solid black square or a marker. In boards generated by OpenCV >=4.6.0, the origin will always be a black square. In older versions, the origin can sometimes be a marker. | *** ### Supported Marker Dictionaries[​](#supported-marker-dictionaries "Direct link to Supported Marker Dictionaries") | Value | Description | | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | "Aruco4x4\_50" | 4x4 bit Aruco containing 50 markers. | | "Aruco4x4\_100" | 4x4 bit Aruco containing 100 markers. | | "Aruco4x4\_250" | 4x4 bit Aruco containing 250 markers. | | "Aruco4x4\_1000" | 4x4 bit Aruco containing 1000 markers. | | "Aruco5x5\_50" | 5x5 bit Aruco containing 50 markers. | | "Aruco5x5\_100" | 5x5 bit Aruco containing 100 markers. | | "Aruco5x5\_250" | 5x5 bit Aruco containing 250 markers. | | "Aruco5x5\_1000" | 5x5 bit Aruco containing 1000 markers. | | "Aruco6x6\_50" | 6x6 bit Aruco containing 50 markers. | | "Aruco6x6\_100" | 6x6 bit Aruco containing 100 markers. | | "Aruco6x6\_250" | 6x6 bit Aruco containing 250 markers. 
| | "Aruco6x6\_1000" | 6x6 bit Aruco containing 1000 markers. | | "Aruco7x7\_50" | 7x7 bit Aruco containing 50 markers. | | "Aruco7x7\_100" | 7x7 bit Aruco containing 100 markers. | | "Aruco7x7\_250" | 7x7 bit Aruco containing 250 markers. | | "Aruco7x7\_1000" | 7x7 bit Aruco containing 1000 markers. | | "ArucoOriginal" | 5x5 bit Aruco containing the original generated marker library. | | "Apriltag16h5" | 4x4 bit Apriltag containing 20 markers. Minimum hamming distance between any two codes is 5. | | "Apriltag25h9" | 5x5 bit Apriltag containing 35 markers. Minimum hamming distance between any two codes is 9. | | "Apriltag36h10" | 6x6 bit Apriltag containing 2320 markers. Minimum hamming distance between any two codes is 10. | | "Apriltag36h11" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. | | "ApriltagKalibr" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. Every marker has a border width of 2; this is the only main differentiator between Kalibr and other Apriltag types. | ## Lidar Circle Target[​](#lidar-circle-target "Direct link to Lidar Circle Target") This target is just a markerboard with a retroreflective ring on its surface. This target type is required to perform a Camera ↔ LiDAR or LiDAR ↔ LiDAR calibration with MetriCal. This target is represented in object space as two separate object spaces: one for the markerboard and one for the circle. The description below is only for the circle. See the documentation on using [mutual construction groups](/metrical/targets/multiple_targets.md#mutual-construction-groups) for more information. Using Multiple Circle Targets When using multiple circle targets, MetriCal requires that the radii of the circles be at least **10cm different** from each other. If the radii are too similar, the calibration may fail or produce incorrect results. ![Target: Circle Target](/assets/images/circle_plane_construction-bc071c3a36cdcef10afa5503722bffaa.png) | Field | Type | Description | | ------------------------ | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `radius` | float | The radius of the circle, in meters. Measured to the circle's outer edge. | | `x_offset` | float | The horizontal distance between the [origin](#origin) of the markerboard and the center of the circle, in meters | | `y_offset` | float | The vertical distance between the [origin](#origin) of the markerboard and the center of the circle, in meters | | `z_offset` | float | (Optional) The depth between the \[origin]\(#origin of the markerboard and the center of the circle, in meters. Typically this should not be explicitly set since the circle is in-plane with the rest of the target. However, there are some cases where the detected circle is consistently offset from the board and this can be used to model some of that error. | | `detect_interior_points` | boolean | Whether to use the LiDAR points detected within the circle bounds as part of the optimization. 
Doing so will produce [Interior Points to Plane Metrics](/metrical/results/residual_metrics/interior_points_to_plane_error.md) | | `reflective_tape_width` | float | (Optional) The width of the circular retroreflective tape, in meters. Used as a hint during circle detection. Defaults to 5cm | ### "Origin"?[​](#origin "Direct link to \"Origin\"?") Notice that `x_offset` and `y_offset` are relative to the "origin" of the markerboard pattern. The way that this is defined differs across board types, and can be a bit confusing. If you are following our [target construction guidelines](/metrical/targets/target_construction.md) and are using a Tangram premade target, we have already calculated offsets from the proper origin. If you are constructing your own custom circle target, you will need to find the origin of your board in order to measure circle offsets yourself. To find the origin of an **AprilGrid** style target, first find the tag matching the `marker_id_offset` of the board. Orient this tag so that it's in the **bottom-left corner of your frame of reference**. The origin of the board is the **top left corner** of the **top left marker** of your board (see diagram). ![Target: Circular AprilGrid Description](/assets/images/fiducial_circle_desc_aprilgrid-b4ad7b1694e1bda235015e236a91d3de.png) To find the origin of a **Markerboard** style target, first find the tag matching the `marker_id_offset` of the board. Orient this tag so that it's in the **top-left corner of your frame of reference**. The origin of the board is the **bottom right corner** of the **top left checker** of your board (see diagram). Note that the top left checker of your board may be either a black square or a white checker with a tag inside of it. If it's the latter, the origin is the corner of the checker rather than the tag itself. ![Target: Circle Target Description](/assets/images/fiducial_circle_desc-1a709db6b954b1f4358ee15882eb2fc9.png) ## SquareMarkers[​](#squaremarkers "Direct link to SquareMarkers") "SquareMarkers" is a general catch-all for a collection of signalized markers, e.g. a calibration space made up of many unconnected ArUco or April tags. ![Target: Markers](/assets/images/fiducial_markers_desc-6b4aff646787fa83efa4b5202a5998a0.png) | Field | Type | Description | | ------------------- | ---------------- | ----------------------------------------------------------------------------------- | | `marker_dictionary` | string | The marker dictionary used on this target. See Supported Marker Dictionaries below. | | `marker_ids` | list of integers | The marker IDs present in this set of SquareMarkers. | *** ### Supported Marker Dictionaries[​](#supported-marker-dictionaries "Direct link to Supported Marker Dictionaries") | Value | Description | | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | "Aruco4x4\_50" | 4x4 bit Aruco containing 50 markers. | | "Aruco4x4\_100" | 4x4 bit Aruco containing 100 markers. | | "Aruco4x4\_250" | 4x4 bit Aruco containing 250 markers. | | "Aruco4x4\_1000" | 4x4 bit Aruco containing 1000 markers. | | "Aruco5x5\_50" | 5x5 bit Aruco containing 50 markers. | | "Aruco5x5\_100" | 5x5 bit Aruco containing 100 markers. | | "Aruco5x5\_250" | 5x5 bit Aruco containing 250 markers. | | "Aruco5x5\_1000" | 5x5 bit Aruco containing 1000 markers. | | "Aruco6x6\_50" | 6x6 bit Aruco containing 50 markers. 
| | "Aruco6x6\_100" | 6x6 bit Aruco containing 100 markers. | | "Aruco6x6\_250" | 6x6 bit Aruco containing 250 markers. | | "Aruco6x6\_1000" | 6x6 bit Aruco containing 1000 markers. | | "Aruco7x7\_50" | 7x7 bit Aruco containing 50 markers. | | "Aruco7x7\_100" | 7x7 bit Aruco containing 100 markers. | | "Aruco7x7\_250" | 7x7 bit Aruco containing 250 markers. | | "Aruco7x7\_1000" | 7x7 bit Aruco containing 1000 markers. | | "ArucoOriginal" | 5x5 bit Aruco containing the original generated marker library. | | "Apriltag16h5" | 4x4 bit Apriltag containing 20 markers. Minimum hamming distance between any two codes is 5. | | "Apriltag25h9" | 5x5 bit Apriltag containing 35 markers. Minimum hamming distance between any two codes is 9. | | "Apriltag36h10" | 6x6 bit Apriltag containing 2320 markers. Minimum hamming distance between any two codes is 10. | | "Apriltag36h11" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. | | "ApriltagKalibr" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. Every marker has a border width of 2; this is the only main differentiator between Kalibr and other Apriltag types. | ## DotMarkers[​](#dotmarkers "Direct link to DotMarkers") Under Development This target type is still under development. Please check back later for more information. DotMarkers are the same as SquareMarkers, but all of the black and white squares within the tags are replaced with circles! In addition to being a fun name, DotMarkers allow MetriCal to use circles while also preserving the code information of Aruco or April tags. ![Target: DotMarker](/assets/images/fiducial_dotmarker-b03e82d9b11bdfeca55c21bddfb942c8.png) | Field | Type | Description | | ------------------- | ---------------- | ----------------------------------------------------------------------------------- | | `marker_dictionary` | string | The marker dictionary used on this target. See Supported Marker Dictionaries below. | | `marker_ids` | list of integers | The marker IDs present in this set of SquareMarkers. | *** ### Supported Marker Dictionaries[​](#supported-marker-dictionaries "Direct link to Supported Marker Dictionaries") | Value | Description | | ---------------- | --------------------------------------------------------------- | | "Aruco4x4\_50" | 4x4 bit Aruco containing 50 markers. | | "Aruco4x4\_100" | 4x4 bit Aruco containing 100 markers. | | "Aruco4x4\_250" | 4x4 bit Aruco containing 250 markers. | | "Aruco4x4\_1000" | 4x4 bit Aruco containing 1000 markers. | | "Aruco5x5\_50" | 5x5 bit Aruco containing 50 markers. | | "Aruco5x5\_100" | 5x5 bit Aruco containing 100 markers. | | "Aruco5x5\_250" | 5x5 bit Aruco containing 250 markers. | | "Aruco5x5\_1000" | 5x5 bit Aruco containing 1000 markers. | | "Aruco6x6\_50" | 6x6 bit Aruco containing 50 markers. | | "Aruco6x6\_100" | 6x6 bit Aruco containing 100 markers. | | "Aruco6x6\_250" | 6x6 bit Aruco containing 250 markers. | | "Aruco6x6\_1000" | 6x6 bit Aruco containing 1000 markers. | | "Aruco7x7\_50" | 7x7 bit Aruco containing 50 markers. | | "Aruco7x7\_100" | 7x7 bit Aruco containing 100 markers. | | "Aruco7x7\_250" | 7x7 bit Aruco containing 250 markers. | | "Aruco7x7\_1000" | 7x7 bit Aruco containing 1000 markers. | | "ArucoOriginal" | 5x5 bit Aruco containing the original generated marker library. 
|

## Checkerboard[​](#checkerboard "Direct link to Checkerboard")

Limitations of Checkerboards

While MetriCal supports checkerboards, it is important to note some limitations:

* Points on the checkerboard are ambiguous. No calibration system can reliably tell the difference between a checkerboard rotated 180° and one that is not rotated at all. The same applies between rotations of 90° and 270°. This ambiguity means that MetriCal cannot reliably differentiate extrinsics, which causes [projective compensations](/metrical/core_concepts/projective_compensation.md).
* The entire checkerboard needs to be visible in the field-of-view of the camera. With coded targets or asymmetric patterns, MetriCal can still identify key features without the full target in view.

We recommend using coded detectors like the Markerboard whenever possible. This allows MetriCal to be more flexible to different data collection practices, and reduces the burden on you to keep the entire object space in frame at all times.

Anyone who has ever tried their hand at calibration is familiar with the checkerboard. This is a flat, contrasting pattern of squares with known dimensionality. It's known for its ease of creation and flexibility in use.

![Target: Checkerboard](/assets/images/fiducial_checkerboard_desc-be489e805ee883d971d5df784d01145d.png)

| Field | Type | Description |
| --- | --- | --- |
| `checker_length` | float | The length of the side of a checker (a solid square), in meters |
| `corner_height` | float | The number of inner corners on the board, counted vertically. This is one less than the number of rows of squares on the board |
| `corner_width` | float | The number of inner corners on the board, counted horizontally. This is one less than the number of columns of squares on the board |

---