# Tangram Vision Documentation > Use MetriCal for accurate, precise, and expedient calibration of multimodal sensor suites ## metrical ### calibration_guides - [Camera ↔ IMU Calibration Guide](/metrical/calibration_guides/camera_imu_cal.md): This file explains IMU calibration with MetriCal, which requires no specific targets since IMUs measure forces and velocities directly, and must be performed alongside camera calibration. - [Camera ↔ LiDAR Calibration Guide](/metrical/calibration_guides/camera_lidar_cal.md): This file provides a comprehensive guide for calibrating camera-LiDAR sensor pairs using MetriCal, including both combined and staged calibration approaches with circular targets. - [Calibration Guide Overview](/metrical/calibration_guides/guide_overview.md): This file provides general calibration principles and guidelines that apply to all MetriCal calibration scenarios, including target selection, data quality considerations, and sensor-specific guidance. - [LiDAR ↔ LiDAR Calibration Guide](/metrical/calibration_guides/lidar_lidar_cal.md): This file provides a guide for calibrating multiple LiDAR sensors together using MetriCal with circle targets and includes a practical example dataset. - [Camera ↔ LNS Guide](/metrical/calibration_guides/local_navigation_cal.md): This file provides a guide for calibrating local navigation systems using MetriCal and includes a practical example dataset. - [Multi-Camera Calibration Guide](/metrical/calibration_guides/multi_camera_cal.md): This file provides guidance for multi-camera calibration in MetriCal, covering both combined and staged approaches for calibrating camera intrinsics and extrinsics simultaneously. - [Narrow Field-of-View Camera Calibration](/metrical/calibration_guides/narrow_fov_cal.md): This file provides an outline of the procedure for calibrating narrow field-of-view cameras with multiple target boards using MetriCal. - [Single Camera Calibration Guide](/metrical/calibration_guides/single_camera_cal.md): This file provides tips and best practices for single camera calibration using MetriCal, including basic workflows and techniques for seeding larger multi-stage calibrations. ### calibration_models - [Camera Models](/metrical/calibration_models/cameras.md): This file documents all supported camera intrinsics models in MetriCal, including pinhole, distortion models, and fisheye lenses with their respective parameters and mathematical descriptions. - [IMU Models](/metrical/calibration_models/imu.md): This file documents all supported IMU intrinsics models in MetriCal for calibrating accelerometer and gyroscope measurements with their mathematical modeling and correction processes. - [LiDAR Models](/metrical/calibration_models/lidar.md): This file documents the supported LiDAR models in MetriCal, which currently focuses on extrinsics calibration since LiDAR intrinsics are typically reliable from the factory. - [Local Navigation System Models](/metrical/calibration_models/local_navigation.md): This file documents the local navigation systems calibration model in MetriCal. ### changelog This file contains the comprehensive release notes and changelogs for MetriCal versions, documenting new features, bug fixes, breaking changes, and version-specific improvements. - [Releases + Changelogs](/metrical/changelog.md): This file contains the comprehensive release notes and changelogs for MetriCal versions, documenting new features, bug fixes, breaking changes, and version-specific improvements.
### commands - [MetriCal Command: Calibrate](/metrical/commands/calibration/calibrate.md): This file documents the MetriCal calibrate command, which performs full bundle adjustment calibration of multi-modal sensor systems using pre-recorded data, plex configuration, and object space definitions. - [MetriCal Command: Consolidate](/metrical/commands/calibration/consolidate.md): This file documents the MetriCal consolidate object spaces command, which combines multiple object spaces into a single unified configuration using object relative extrinsics. - [MetriCal Command: Init](/metrical/commands/calibration/init.md): This file documents the MetriCal init command, which creates uncalibrated plex files from datasets by inferring components, spatial constraints, and temporal constraints. - [Focus](/metrical/commands/calibration/shape/shape_focus.md): This file documents the MetriCal shape focus command, which creates hub-and-spoke plex configurations where all components are spatially connected to one focus component. - [Lookup Table](/metrical/commands/calibration/shape/shape_lut.md): This file documents the MetriCal shape LUT command, which creates pixel-wise lookup tables for single camera distortion correction and image undistortion. - [Stereo Lookup Table](/metrical/commands/calibration/shape/shape_lut_stereo.md): This file documents the MetriCal shape stereo LUT command, which creates pixel-wise lookup tables for stereo camera pair rectification and stereo vision applications. - [Minimum Spanning Tree](/metrical/commands/calibration/shape/shape_mst.md): This file documents the MetriCal shape MST command, which creates a plex containing only the minimum spanning tree of spatial components with optimal covariance connections. - [Shape Mode](/metrical/commands/calibration/shape/shape_overview.md): This file provides an overview of MetriCal's shape command, which transforms plex files into various useful output formats for practical deployment and system integration. - [Tabular](/metrical/commands/calibration/shape/shape_tabular.md): This file documents the MetriCal shape tabular command, which exports intrinsic and extrinsic calibration data from plex files in simplified table-based formats. - [URDF](/metrical/commands/calibration/shape/shape_urdf.md): This file documents the MetriCal shape URDF command, which creates ROS-compatible URDF files from plex configurations for robotic system integration. - [MetriCal Command: Completion](/metrical/commands/cli_utilities/completion.md): This file describes how to generate shell auto-completion scripts for MetriCal across different shell environments including bash, fish, elvish, powershell, and zsh. - [MetriCal Command: Display](/metrical/commands/cli_utilities/display.md): This file documents the MetriCal display command, which visualizes calibration results applied to datasets using Rerun for ocular validation of calibration quality. - [MetriCal Command: Report](/metrical/commands/cli_utilities/report.md): This file documents the MetriCal report command, which generates comprehensive calibration reports from plex files or calibration output files in human-readable formats. - [Errors & Troubleshooting](/metrical/commands/command_errors.md): This file catalogs MetriCal's error codes and exit codes with descriptions and troubleshooting steps for various operational failures and system issues. 
- [MetriCal Commands](/metrical/commands/commands_overview.md): This file provides an overview of all MetriCal commands including init, calibrate, report, display, shape, and their universal options and usage patterns. - [Global Arguments](/metrical/commands/global_arguments.md): This file provides an overview of the global arguments available in MetriCal commands. - [Manifest Overview](/metrical/commands/manifest/manifest_overview.md): This file explains what a manifest is and how it is used in MetriCal. - [MetriCal Command: New](/metrical/commands/manifest/new.md): This file describes how to create a new MetriCal manifest with default configuration. - [MetriCal Command: Run](/metrical/commands/manifest/run.md): This file describes how to run a MetriCal manifest. - [MetriCal Command: Validate](/metrical/commands/manifest/validate.md): This file describes how to validate a MetriCal manifest for correctness. ### configuration - [Valid Data Formats](/metrical/configuration/data_formats.md): This file documents the supported data formats in MetriCal including MCAP files, ROS bags, folder datasets, and their respective message encodings and requirements. - [Group Management](/metrical/configuration/groups.md): This file explains group management in the Tangram Vision Hub, including how to name groups, add members, assign roles, and manage organizational permissions for MetriCal usage. - [Installation](/metrical/configuration/installation.md): This file provides installation instructions for MetriCal via apt repository for Ubuntu/Pop!_OS systems and via Docker for other operating systems. - [License Creation](/metrical/configuration/license_creation.md): This file explains how to create personal and group licenses for MetriCal through the Tangram Vision Hub, including license naming, creation, and revocation procedures. - [License Usage](/metrical/configuration/license_usage.md): This file provides comprehensive instructions for using MetriCal license keys, including command line arguments, environment variables, configuration files, and offline licensing options. - [Visualization](/metrical/configuration/visualization.md): This file explains how to set up and use MetriCal's visualization features with Rerun for visual inspection of calibration data, detections, and spatial alignment verification. ### core_concepts - [Components](/metrical/core_concepts/components.md): This file defines components as atomic sensing units in MetriCal's Plex system, detailing camera, LiDAR, IMU, and LNS component types with their intrinsic parameters and modeling approaches. - [Constraints](/metrical/core_concepts/constraints.md): This file defines constraints as spatial relations, temporal relations, or formations between components in MetriCal's Plex system, explaining directional conventions and extrinsic transformations. - [Covariance](/metrical/core_concepts/covariance.md): This file explains covariance as a measure of uncertainty in MetriCal's Plex system, differentiating it from binary fixed/variable parameters and providing statistical modeling capabilities. - [Object Space](/metrical/core_concepts/object_space_overview.md): This file explains MetriCal's Object Space concept, which refers to known sets of features in the environment used for cross-modality calibration and establishing metric scale. 
- [Plex](/metrical/core_concepts/plex_overview.md): This file explains MetriCal's Plex concept, which represents the spatial relations, temporal relations, and formations within perception systems as a graph of components and constraints. - [Projective Compensation](/metrical/core_concepts/projective_compensation.md): This file explains projective compensation, a phenomenon where errors in one calibration parameter affect other parameters due to correlation in observations or poor model choices. ### intro This file introduces MetriCal, a tool for accurate and precise calibration of multimodal sensor suites with support for cameras, LiDAR, IMU sensors, and various calibration targets. - [Welcome to MetriCal](/metrical/intro.md): This file introduces MetriCal, a tool for accurate and precise calibration of multimodal sensor suites with support for cameras, LiDAR, IMU sensors, and various calibration targets. ### results - [MetriCal Results MCAP](/metrical/results/output_file.md): This file documents MetriCal's MCAP output format containing optimized plex data, object space information, and detailed calibration metrics. - [Binned Feature Counts](/metrical/results/pre_calibration_metrics/binned_feature_counts.md): Documents the binned feature counts pre-calibration metric. - [Circle Coverage](/metrical/results/pre_calibration_metrics/circle_coverage.md): Documents the circle coverage pre-calibration metric. - [Topic Filter Statistics](/metrical/results/pre_calibration_metrics/topic_filter_statistics.md): Documents the topic filter statistics pre-calibration metric. - [MetriCal Reports](/metrical/results/report.md): This file explains MetriCal's comprehensive calibration reports, including charts, diagnostics, metrics interpretation, and HTML report generation for calibration quality assessment. - [Circle Edge Misalignment](/metrical/results/residual_metrics/circle_edge_misalignment.md): This file explains circle edge misalignment metrics unique to MetriCal, which measure the bridging between camera 2D features and LiDAR 3D point clouds using circular calibration targets (specifically, this is about the edge misalignment cost). - [Circle Misalignment](/metrical/results/residual_metrics/circle_misalignment.md): This file explains circle misalignment metrics unique to MetriCal, which measure the bridging between camera 2D features and LiDAR 3D point clouds using circular calibration targets. - [Composed Relative Extrinsics](/metrical/results/residual_metrics/composed_relative_extrinsics.md): This file documents composed relative extrinsics error metrics in MetriCal, which measure consistency between relative extrinsics formed through object spaces and direct component transforms. - [Differenced Pose Trajectory Error](/metrical/results/residual_metrics/differenced_pose_trajectory_error.md): This file explains differenced pose trajectory errors in MetriCal, which measure the error in aligning two different pose trajectories over the data capture period. - [Image Reprojection](/metrical/results/residual_metrics/image_reprojection.md): This file explains image reprojection error metrics in MetriCal, which measure the precision of camera calibration by comparing feature positions in images to their object space counterparts. - [IMU PreIntegration Error](/metrical/results/residual_metrics/imu_preintegration_error.md): This file documents IMU preintegration error metrics in MetriCal, which measure the consistency of IMU measurements across preintegration windows defined by navigation states.
- [Interior Points to Plane Error](/metrical/results/residual_metrics/interior_points_to_plane_error.md): This file explains interior points to plane error metrics in MetriCal, which measure the fit between LiDAR points detected on circular target surfaces and the actual target plane. - [Object Inertial Extrinsics Error](/metrical/results/residual_metrics/object_inertial_extrinsic_error.md): This file explains object inertial extrinsics error metrics in MetriCal, which measure errors between sequences of measured and optimized extrinsics involving IMU navigation states. - [Paired 3D Point Error](/metrical/results/residual_metrics/paired_3d_point_error.md): This file documents paired 3D point error metrics in MetriCal, which measure LiDAR alignment quality by comparing detected circle centers between different LiDAR frames of reference. - [Paired Plane Normal Error](/metrical/results/residual_metrics/paired_plane_normal_error.md): This file documents paired plane normal error metrics in MetriCal, which measure LiDAR alignment quality by comparing detected plane normals of circle targets between LiDAR frames of reference. ### special_topics - [Calibrate RealSense Sensors](/metrical/special_topics/calibrate_realsense.md): This file provides a tutorial for calibrating multiple Intel RealSense sensors simultaneously using MetriCal, including data recording procedures and calibration flashing. - [Migrate from Kalibr to MetriCal](/metrical/special_topics/kalibr_to_metrical_migration.md): This file provides a comprehensive guide for migrating calibration workflows from Kalibr to MetriCal, highlighting operational differences and improved capabilities. ### support_and_admin - [Billing](/metrical/support_and_admin/billing.md): This file explains how to create, cancel, and manage MetriCal subscriptions, payment methods, and billing contact information through the Tangram Vision Hub. - [Contact Us](/metrical/support_and_admin/contact.md): This file provides contact information for MetriCal user support, partnering inquiries, and citation guidelines for academic research. - [Legal Information](/metrical/support_and_admin/legal.md): This file contains legal disclaimers, confidentiality notices, and comprehensive third-party license information for the Tangram Vision Platform and MetriCal software. ### targets - [Camera Targets](/metrical/targets/camera_targets.md): This file documents all supported calibration targets in MetriCal including AprilGrid, markerboards, checkerboards, and LiDAR circle targets with their specifications and usage guidelines. - [Combining Modalities](/metrical/targets/combining_modalities.md): This file documents how to combine multiple modalities simultaneously in MetriCal using different target types. - [LiDAR Targets](/metrical/targets/lidar_targets.md): This file documents all supported calibration targets in MetriCal for LiDAR with their specifications and usage guidelines. - [Using Multiple Targets](/metrical/targets/multiple_targets.md): This file explains how to use multiple calibration targets simultaneously in MetriCal, including object space configuration, UUID generation, and target positioning for complex setups. - [Target Construction Guide](/metrical/targets/target_construction.md): This file provides comprehensive guidance on constructing calibration targets for MetriCal, including target selection, printing considerations, and quality control measures.
- [Target Overview](/metrical/targets/target_overview.md): This file explains the different types of calibration targets supported by MetriCal. --- # Full Documentation Content # Camera ↔ IMU Calibration Guide Unlike other modalities supported by MetriCal, IMU calibrations do not require any specific object space or target, as these types of components (accelerometers and gyroscopes) measure forces and rotational velocities directly. Rolling Shutter Cameras MetriCal currently does not perform well with IMU ↔ camera calibrations if the camera is a rolling shutter camera. We advise calibrating with a global shutter camera whenever possible. ## Example Dataset and Manifest[​](#example-dataset-and-manifest "Direct link to Example Dataset and Manifest") We've captured an example of a good IMU ↔ Camera calibration dataset that you can use to test out MetriCal. If it's your first time performing an IMU calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like. This dataset features: * Observations as an [MCAP](/metrical/configuration/data_formats.md) * Two infrared cameras * One color camera * One IMU * Three AprilGrid-style targets arranged in a box formation [📁Download: Camera-IMU Example Dataset](https://drive.google.com/drive/folders/1lFgVpRgUtblwlU0ht9KpaqMUyfOIuj7-?usp=drive_link) ### The Manifest[​](#the-manifest "Direct link to The Manifest") camera\_imu\_box\_manifest.toml ``` [project] name = "MetriCal Demo: Camera IMU Pipeline" version = "15.0.0" description = "Pipeline for running MetriCal on a camera-imu dataset, with 3 boards forming a box." workspace = "metrical-results" ## === VARIABLES === [project.variables.dataset] description = "Path to the input MCAP dataset." value = "camera_imu_box.mcap" [project.variables.object-space] description = "Path to the input object space JSON file." value = "camera_imu_box_objects.json" ## === STAGES === [stages.first-stage] command = "init" dataset = "{{variables.dataset}}" reference-source = [] topic-to-model = [ ["*infra*", "opencv-radtan"], ["*color*", "opencv-radtan"], ["*imu*", "scale-shear"] ] ... # ...more options... initialized-plex = "{{auto}}" [stages.second-stage] command = "calibrate" dataset = "{{variables.dataset}}" input-plex = "{{first-stage.initialized-plex}}" input-object-space = "{{variables.object-space}}" camera-motion-threshold = "lenient" render = true ... # ...more options... detections = "{{auto}}" results = "{{auto}}" ``` Let's take a look at some of the important details of this manifest: * Our first stage, the Init command, is assigning all topics that match the `*infra*` pattern to the `opencv-radtan` model. Similar glob matching is used for color cameras and IMU topics. This is a convenient way to assign models to topics without needing to know the exact topic names ahead of time. Don't worry: MetriCal will yell at you if you have conflicting topics that match the same pattern. * Our second stage, the Calibrate command, has a `lenient` [camera motion threshold](/metrical/commands/calibration/calibrate.md#camera-motion-threshold). IMU calibration necessarily needs motion to excite all axes during capture, so it wouldn't be a great idea to filter out all of the motion in the dataset. * Our second stage is rendered! This flag will allow us to watch the detection phase of the calibration as it happens in real time. This can have a large impact on performance, but is invaluable for debugging data quality issues. 
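One note on the `topic-to-model` entries above: glob patterns are purely a convenience, and you can also list exact topic names, just as the LiDAR manifests later in these guides do. Here is a minimal sketch of an explicit mapping, assuming hypothetical RealSense-style topic names (substitute the actual topics from your own dataset):

```toml
# Hypothetical explicit topic-to-model mapping; the topic names below are
# placeholders and should be replaced with the topics in your dataset.
topic-to-model = [
    ["/camera/infra1/image_rect_raw", "opencv-radtan"],
    ["/camera/infra2/image_rect_raw", "opencv-radtan"],
    ["/camera/color/image_raw", "opencv-radtan"],
    ["/camera/imu", "scale-shear"],
]
```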
Match Rerun Versions MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the `--render` flag. Please ensure that you've followed the [visualization configuration instructions](/metrical/configuration/visualization.md) before running this manifest. ### Running the Manifest[​](#running-the-manifest "Direct link to Running the Manifest") With a copy of the dataset downloaded and the manifest file created, you should be ready to roll: ``` metrical run camera_imu_box_manifest.toml ``` ![A screenshot of the MetriCal detection visualization](/assets/images/camera_imu_visualization-4a246d6b29385a1513aa30664a8ef1cd.png) While the calibration is running, take specific note of the frequency and magnitude of the sensor motion, as well as the still periods following periods of motion. When it comes time to capture your own data, try to replicate these motion patterns to the best of your ability. When the run finishes, you'll be left with three artifacts: * `initialized-plex.json`: Our initialized plex from the first stage. * `report.html`: a human-readable summary of the calibration run. Everything in the report is also logged to your console in realtime during the calibration. You can learn more about interpreting the report [here.](/metrical/results/report.md) * `results.mcap`: a file containing the final calibration and various other metrics. You can learn more about results [here](/metrical/results/output_file.md) and about manipulating your results using `shape` commands [here](/metrical/commands/calibration/shape/shape_overview.md). And that's it! Hopefully this trial run will have given you a better understanding of how to capture data for your own IMU ↔ Camera calibration. ## Data Capture Guidelines[​](#data-capture-guidelines "Direct link to Data Capture Guidelines") ### Maximize View of Object Spaces for Duration of Capture[​](#maximize-view-of-object-spaces-for-duration-of-capture "Direct link to Maximize View of Object Spaces for Duration of Capture") IMU calibrations are structured such that MetriCal jointly solves for the first order gyroscope and accelerometer biases in addition to solving for the relative IMU-from-camera extrinsic. This is done by comparing the world pose (or world extrinsic) of the camera between frames to the expected motion that is measured by the accelerometer and gyroscope of the IMU. Because of how this problem is posed, the best way to produce consistent, precise IMU ↔ camera calibrations is to maximize the visibility of one or more targets in the object space from one of the cameras being calibrated alongside the IMU. Put in a different way: avoid capturing sections of data where the IMU is recording but where no object space or target can be seen from any camera. Doing so can lead to misleading bias estimates. ### Excite All IMU Axes During Capture[​](#excite-all-imu-axes-during-capture "Direct link to Excite All IMU Axes During Capture") IMU calibrations are no different from any other modality in that they are an entirely data-driven process. In particular, the underlying data needs to demonstrate observed translational and rotational motions in order for MetriCal to understand the motion path that the IMU has followed. This is what is meant by "exciting" an IMU: accelerations and rotational velocities must be observable in the data (different enough from the underlying noise in the measurements) so as to be separately observable from e.g. the biases.
This means that when capturing data to calibrate between an IMU and one or more cameras, it is important to move the IMU translationally across all accelerometer axes, and rotationally about all gyroscope axes. This motion can be repetitive so long as a sufficient magnitude of motion has been achieved. It is difficult to prescribe an exact magnitude, as it depends on the kind of IMU being calibrated (e.g. whether it is a MEMS IMU, how large it is, what kinds of noise it exhibits, etc.). Tip We suggest alternating between periods of "excitement" or motion with the IMU and holding still so that the camera(s) can accurately and precisely measure the given object space. If you find yourself still having trouble getting a sufficient number of observations to produce reliable calibrations, we suggest bumping up the threshold for our motion filter heuristic when calling `metrical calibrate`. This is controlled by the `--camera-motion-threshold` flag. A value of 3.0 through 5.0 can sometimes improve the quality of the calibration by a significant amount. ### Reduce Motion Blur in Camera Images[​](#reduce-motion-blur-in-camera-images "Direct link to Reduce Motion Blur in Camera Images") This advice holds for both multi-camera and IMU ↔ camera calibrations. It is often advisable to reduce the effects of motion in the images to produce more crisp, detailed images to calibrate against. Some ways to do this are to: 1. Always use a global shutter camera 2. Reduce the overall exposure time of the camera 3. Avoid over-exciting IMU motion, and don't be scared to slow down a little if you find you can rarely, if ever, detect the object space. ## Troubleshooting[​](#troubleshooting "Direct link to Troubleshooting") If you encounter errors during calibration, please refer to our [Errors and Troubleshooting](/metrical/commands/command_errors.md) documentation. Remember that all measurements for your targets should be in meters, and you should ensure visibility of as much of the target as possible when collecting data. --- # Camera ↔ LiDAR Calibration Guide Camera ↔ LiDAR calibration is unique: it represents the intersection of a 2D data format (images) with a 3D one (point clouds). In order to bridge the gap, MetriCal uses the board + circle target. This involves taping a retroreflective circle onto something that can be detected by a camera, like a ChArUco or AprilGrid. Read more about it in the [target guide on combining modalities](/metrical/targets/combining_modalities.md). No LiDAR Intrinsics... Yet. MetriCal doesn't currently calibrate LiDAR intrinsics. If you're interested in calibrating the intrinsics of your LiDAR for better accuracy and precision, get in touch! ## Common Camera-LiDAR Workflows[​](#common-camera-lidar-workflows "Direct link to Common Camera-LiDAR Workflows") MetriCal offers two approaches to Camera-LiDAR calibration: combined (all-at-once) and staged. The best approach depends on your specific hardware setup and calibration requirements. ### Combined Calibration[​](#combined-calibration "Direct link to Combined Calibration") The combined approach calibrates camera intrinsics and Camera-LiDAR extrinsics simultaneously, which typically yields the most accurate results. This is the approach that we take in the [example dataset below](#example-dataset-and-manifest).
This approach works best when: * Your LiDAR and camera(s) can simultaneously view the calibration target * The circle target can be positioned to fill the camera's field of view * You have good control over the target positioning ### Staged Calibration[​](#staged-calibration "Direct link to Staged Calibration") For complex rigs where optimal camera calibration and optimal LiDAR-camera calibration require different setups, a staged approach is recommended: ``` # First stages: Camera-only calibration [stages.first-init] command = "init" dataset = "{{variables.camera-only-dataset}}" ... [stages.first-calibration] command = "calibrate" dataset = "{{variables.camera-only-dataset}}" input-plex = "{{first-init.initialized-plex}}" ... # Second stages: Camera-lidar calibration with # referenced camera intrinsics from first stages [stages.second-init] command = "init" dataset = "{{variables.camera-lidar-dataset}}" # >>> Use camera cal from first stages <<< reference-source = ["{{first-calibration.results}}"] ... [stages.second-calibration] command = "calibrate" dataset = "{{variables.camera-lidar-dataset}}" input-plex = "{{second-init.initialized-plex}}" ... # This will now have all intrinsics + extrinsics results = "{{auto}}" ``` This staged approach allows you to capture optimal data for camera intrinsics in one dataset, getting close to ensure full FOV coverage, and then use a different dataset optimized for Camera-LiDAR extrinsics in the second stage. The staged approach is particularly useful when: * Your camera is mounted in a hard-to-reach position * You need different viewing distances for optimal camera calibration vs. LiDAR-camera calibration * You're calibrating a complex rig with multiple sensors For tips on getting a great camera calibration, see the [Single Camera Calibration guide](/metrical/calibration_guides/single_camera_cal.md) and the [Multi-Camera Calibration guide](/metrical/calibration_guides/multi_camera_cal.md). ## Example Dataset and Manifest[​](#example-dataset-and-manifest "Direct link to Example Dataset and Manifest") We've captured an example of a good LiDAR ↔ Camera calibration dataset that you can use to test out MetriCal. If it's your first time performing a LiDAR calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like. This dataset features: * Observations as a [folder of folders](/metrical/configuration/data_formats.md) * Two infrared cameras * One LiDAR * One [Camera-LiDAR multi-modal target](/metrical/targets/combining_modalities.md) [📁Download: Camera-LiDAR Example Dataset](https://drive.google.com/drive/folders/1YRIiaKXCjhZPtLrYNEjfZxlI2HWew7g7?usp=drive_link) ### The Manifest[​](#the-manifest "Direct link to The Manifest") camera\_lidar\_va\_manifest.toml ``` [project] name = "MetriCal Demo: Camera LiDAR VA Pipeline" version = "15.0.0" description = "Manifest for running MetriCal on a camera-lidar dataset" workspace = "metrical-results" ## === VARIABLES === [project.variables.dataset] description = "Path to the input dataset containing camera and lidar data." value = "camera_lidar_va_observations/" [project.variables.object-space] description = "Path to the input object space JSON file." value = "camera_lidar_va_objects.json" ## === STAGES === [stages.cam-lidar-init] command = "init" dataset = "{{variables.dataset}}" reference-source = [] topic-to-model = [["*infra*", "no-distortion"], ["velodyne_points", "lidar"]] ... # ...more options... 
initialized-plex = "{{auto}}" [stages.cam-lidar-calibrate] command = "calibrate" dataset = "{{variables.dataset}}" input-plex = "{{ cam-lidar-init.initialized-plex }}" input-object-space = "{{variables.object-space}}" camera-motion-threshold = "disabled" lidar-motion-threshold = "strict" render = true ... # ...more options... detections = "{{auto}}" results = "{{auto}}" ``` Let's take a look at some of the important details of this manifest: * Our first stage, the Init command, is assigning all topics that match the `*infra*` pattern to the `no-distortion` model. This is a convenient way to assign models to topics without needing to know the exact topic names ahead of time. Don't worry: MetriCal will yell at you if you have conflicting topics that match the same pattern. * Our second stage, the Calibrate command, has a `disabled` [camera motion threshold](/metrical/commands/calibration/calibrate.md#camera-motion-threshold), but a `strict` [lidar motion threshold](/metrical/commands/calibration/calibrate.md#lidar-motion-threshold). This is a good way to handle datasets captured outdoors or in a parking lot, where there might be a lot of intensely reflective surfaces that could cause spurious LiDAR detections. * Our second stage is rendered! This flag will allow us to watch the detection phase of the calibration as it happens in real time. This can have a large impact on performance, but is invaluable for debugging data quality issues. Match Rerun Versions MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the `--render` flag. Please ensure that you've followed the [visualization configuration instructions](/metrical/configuration/visualization.md) before running this manifest. ### Running the Manifest[​](#running-the-manifest "Direct link to Running the Manifest") With a copy of the dataset downloaded and the manifest file created, you should be ready to roll: ``` metrical run camera_lidar_va_manifest.toml ``` Immediately, MetriCal will open a visualization window like the following with a LiDAR point cloud and detections overlaid on the camera frames. Note that the LiDAR circle detections will show up as red points in the point cloud. ![A screenshot of the MetriCal detection visualization](/assets/images/camera_lidar_visualization-ac3e6f16bea2795d62fc1bdb61b84935.png) While the calibration is running, take specific note of the target motion patterns, presence of still periods, and breadth of camera coverage. When it comes time to design a motion sequence for your own systems, try to apply any learnings you take from watching this capture. When the run finishes, you'll be left with three artifacts: * `initialized-plex.json`: Our initialized plex from the first stage. * `report.html`: a human-readable summary of the calibration run. Everything in the report is also logged to your console in realtime during the calibration. You can learn more about interpreting the report [here.](/metrical/results/report.md) * `results.mcap`: a file containing the final calibration and various other metrics. You can learn more about results [here](/metrical/results/output_file.md) and about manipulating your results using `shape` commands [here](/metrical/commands/calibration/shape/shape_overview.md). ### Residual Metrics[​](#residual-metrics "Direct link to Residual Metrics") For camera-LiDAR calibration, MetriCal outputs two key metrics: 1.
**Circle misalignment RMSE**: Indicates how well the center of the markerboard (detected in camera space) aligns with the geometric center of the retroreflective circle (in LiDAR space). Typically, values around 3cm or less indicate a good calibration. 2. **Interior point-to-plane RMSE** (if `detect_interior_points` was enabled): Measures how well the interior points of the circle align with the plane of the circle. Values around 1cm or less are considered good. ![Camera-LiDAR residual metrics](/assets/images/lidar_summary_stats_demo-fa59cfce9ffebefe8e10e9a5c9649a93.png) ### Visualizing Results[​](#visualizing-results "Direct link to Visualizing Results") You can focus on LiDAR detections by double-clicking on `detections` in the Rerun interface. ![LiDAR detections](/assets/images/livox_detections-6ad14bae5cf8019dda871a58e9116ce8.gif) To see the aligned camera-LiDAR view, double-click on a camera in the `corrections` space: ![Camera-LIVOX alignment](/assets/images/livox_demo-0ec26f87d62899a511ff4cf580df0b5e.gif) If the calibration quality doesn't meet your requirements, consider recapturing data at a slower pace or modifying the target with more/less retroreflectivity. ## Data Capture Guidelines[​](#data-capture-guidelines "Direct link to Data Capture Guidelines") LiDAR ↔ Camera calibration is currently extrinsics-only. Note that while we only solve for the extrinsics between the LiDAR and the camera components, the calibration is still a joint calibration as camera *intrinsics* are still estimated as well. ### Best Practices[​](#best-practices "Direct link to Best Practices") | DO | DON'T | | --- | --- | | ✅ Vary distance from sensors to target a few times to capture a wider range of readings. | ❌ Stand at a single distance from the sensors. | | ✅ Make sure some of your captures fill much of the camera field of view. | ❌ Only capture poses where the target does not fill much of the cameras' fields of view. | | ✅ Rotate the target on its planar axis periodically to capture poses of the board at different orientations. | ❌ Keep the board in a single orientation for every pose. | | ✅ Maximize convergence angles by occasionally angling the target up/down and left/right during poses. | ❌ Keep the target in a single orientation, thereby minimizing convergence angles. | | ✅ Pause in between poses to eliminate motion blur effects. | ❌ Move the target constantly or rapidly during data capture. | | ✅ Bring the target close to each camera lens and capture target poses across each camera's entire field of view. | ❌ Stay distant from the cameras or only capture poses that cover part of each camera's field of view. | | ✅ Achieve convergence angles of 35° or more on each axis. | | ### Maximize Variation Across All 6 Degrees Of Freedom[​](#maximize-variation-across-all-6-degrees-of-freedom "Direct link to Maximize Variation Across All 6 Degrees Of Freedom") When capturing the circular target with your camera and LiDAR components, ensure that the target can be seen from a variety of poses by varying: * **Roll, pitch, and yaw rotations** of the target relative to the sensors * **X, Y, and Z translations** between poses MetriCal performs LiDAR-based calibrations best when the target can be seen from a range of different depths.
Try moving forward and away from the target in addition to moving laterally relative to the sensors. ### Pause Between Different Poses[​](#pause-between-different-poses "Direct link to Pause Between Different Poses") MetriCal performs more consistently when there is less motion blur and fewer motion-based artifacts in the captured data. For the best calibrations, pause for 1-2 seconds after every pose of the board. Constant motion in a dataset will typically yield poor results. ## Advanced Considerations[​](#advanced-considerations "Direct link to Advanced Considerations") ### Beware of LiDAR Noise[​](#beware-of-lidar-noise "Direct link to Beware of LiDAR Noise") Some LiDAR sensors can have significant noise when detecting retroreflective surfaces. This can cause a warping effect in the point cloud data, where points are spread out in a way that makes it difficult to detect the true surface of the circle. ![Warping in LiDAR Points from reflectance](/assets/images/lidar_bloom-2c22e69d8496d694691b9e83c5483dce.png) For Ouster LiDARs, this is caused by the retroreflective material saturating the photodiodes and affecting the time-of-flight estimation. To prevent this warping effect, you can lower the signal strength of the emitted beam by sending a `POST` request and modifying the `signal_multiplier` to 0.25 as shown in the [Ouster documentation](https://static.ouster.dev/sensor-docs/image_route1/image_route2/common_sections/API/http-api-v1.html#post-api-v1-sensor-config-multiple-configuration-parameters). ## Troubleshooting[​](#troubleshooting "Direct link to Troubleshooting") If you encounter errors during calibration, please refer to our [Errors and Troubleshooting](/metrical/commands/command_errors.md) documentation. Remember that all measurements for your targets should be in meters, and you should ensure visibility of as much of the target as possible when collecting data. --- # Calibration Guide Overview MetriCal's calibration is a joint process, which includes calibrating intrinsics and extrinsics at the same time. This overview provides general guidelines for calibration before diving into the specific calibration guides. ## General Calibration Principles[​](#general-calibration-principles "Direct link to General Calibration Principles") The following principles apply to all camera calibration scenarios: ### Target Selection[​](#target-selection "Direct link to Target Selection") MetriCal supports [various calibration targets](/metrical/targets/target_overview.md). Choose a target appropriate for your sensor setup and ensure its measurements are precise and specified in meters. Purchase Targets, or Make Your Own You can now purchase calibration targets directly from our [online store](https://shop.tangramvision.com/). This store holds all the targets necessary to run through these calibration guides. If you are a company or facility that needs a more sophisticated target setup for automation or production line purposes, contact us. That being said, you can always make your own! Find examples for AprilGrid, Markerboard, and LiDAR targets in the [MetriCal Premade Targets repository](https://gitlab.com/tangram-vision/platform/metrical_premade_targets) on GitLab. ### Valid Data Formats[​](#valid-data-formats "Direct link to Valid Data Formats") MetriCal takes a number of different input data types, but there are some special considerations that every user should be aware of. Reference the [Valid Data Formats](/metrical/configuration/data_formats.md) documentation for more details.
### Data Quality Considerations[​](#data-quality-considerations "Direct link to Data Quality Considerations") For all camera calibrations, regardless of how many cameras you're calibrating: 1. **Image Focus**: Always ensure targets are in focus 2. **Field of View Coverage**: Capture the target across the entire field of view 3. **Target Orientations**: When using multiple targets, rotate each one to a different orientation 4. **Convergence Angles**: Vary the viewing angle to improve parameter estimation 5. **Motion Clarity**: Avoid motion blur by pausing between poses 6. **Depth Variation**: When possible, use non-planar targets or multiple targets at different depths ## Sensor Calibration Guides[​](#sensor-calibration-guides "Direct link to Sensor Calibration Guides") Good sensor calibration is key to building accurate perception systems. MetriCal helps you calibrate many types of sensors—from a single camera to complex multi-sensor setups, including local navigation systems. Our guides take you through each step of the process, from collecting data to checking your results. We cover single cameras, multiple cameras, camera-LiDAR combinations, and indoor positioning systems with clear instructions and practical tips. Full multi-modal calibrations are also possible in MetriCal. These multi-modal calibrations follow all the same guidelines as their more specialized guides. Choose the guide below that matches your setup to get started. [![Single Camera Calibration](/_assets/guide_single_camera.png)](/metrical/calibration_guides/single_camera_cal.md) ### [Single Camera Calibration](/metrical/calibration_guides/single_camera_cal.md) [Calibrate intrinsics for a single camera](/metrical/calibration_guides/single_camera_cal.md) [View Guide →](/metrical/calibration_guides/single_camera_cal.md) [![Multi-Camera Calibration](/_assets/guide_multi_camera.png)](/metrical/calibration_guides/multi_camera_cal.md) ### [Multi-Camera Calibration](/metrical/calibration_guides/multi_camera_cal.md) [Calibrate multiple cameras together](/metrical/calibration_guides/multi_camera_cal.md) [View Guide →](/metrical/calibration_guides/multi_camera_cal.md) [![Narrow FOV Camera Calibration](/_assets/guide_narrow_fov_camera.png)](/metrical/calibration_guides/narrow_fov_cal.md) ### [Narrow FOV Camera Calibration](/metrical/calibration_guides/narrow_fov_cal.md) [Calibrate narrow FOV cameras with multiple targets](/metrical/calibration_guides/narrow_fov_cal.md) [View Guide →](/metrical/calibration_guides/narrow_fov_cal.md) [![Camera ↔ LiDAR Calibration](/_assets/guide_camera_lidar.png)](/metrical/calibration_guides/camera_lidar_cal.md) ### [Camera ↔ LiDAR Calibration](/metrical/calibration_guides/camera_lidar_cal.md) [Calibrate cameras with LiDAR sensors](/metrical/calibration_guides/camera_lidar_cal.md) [View Guide →](/metrical/calibration_guides/camera_lidar_cal.md) [![Camera ↔ IMU Calibration](/_assets/guide_camera_imu.png)](/metrical/calibration_guides/camera_imu_cal.md) ### [Camera ↔ IMU Calibration](/metrical/calibration_guides/camera_imu_cal.md) [Calibrate cameras with IMU sensors](/metrical/calibration_guides/camera_imu_cal.md) [View Guide →](/metrical/calibration_guides/camera_imu_cal.md) [![LiDAR ↔ LiDAR Calibration](/_assets/guide_lidar_lidar.png)](/metrical/calibration_guides/lidar_lidar_cal.md) ### [LiDAR ↔ LiDAR Calibration](/metrical/calibration_guides/lidar_lidar_cal.md) [Calibrate multiple LiDAR sensors](/metrical/calibration_guides/lidar_lidar_cal.md) [View Guide
→](/metrical/calibration_guides/lidar_lidar_cal.md) [![Local Navigation Systems Calibration](/_assets/guide_local_navigation.png)](/metrical/calibration_guides/local_navigation_cal.md) ### [Local Navigation Systems Calibration](/metrical/calibration_guides/local_navigation_cal.md) [Calibrate local tracking systems relative to other sensors](/metrical/calibration_guides/local_navigation_cal.md) [View Guide →](/metrical/calibration_guides/local_navigation_cal.md) ## Calibration Workflow[​](#calibration-workflow "Direct link to Calibration Workflow") 1. **Setup**: Prepare your sensor(s) and calibration target(s) 2. **Data Capture**: Follow the guidelines in the specific calibration guide 3. **Processing**: Run MetriCal on the captured data 4. **Validation**: Review the [calibration results](/metrical/results/report.md) and [data diagnostics](/metrical/results/report.md#data-diagnostics) 5. **Refinement**: If needed, capture additional data to improve results For detailed instructions on each calibration type, refer to the specific guides linked above. --- # LiDAR ↔ LiDAR Calibration Guide LiDAR ↔ LiDAR calibration is purely extrinsic. This means that it suffers from all of the same maladies that IMU and Local Navigation Systems do: without constraining every degree of freedom, calibration will be difficult. Luckily, we can get this done with a simple [Circle detector](/metrical/targets/lidar_targets.md). This is just a circle of retroreflective tape on a flat surface. That's it! In most MetriCal setups, this circle is often taped onto another target type for multi-purpose use, like a ChArUco board or AprilGrid. However, there's no requirement to do so if you're just calibrating LiDARs. ## Example Dataset and Manifest[​](#example-dataset-and-manifest "Direct link to Example Dataset and Manifest") We've captured an example of a good LiDAR ↔ LiDAR calibration dataset that you can use to test out MetriCal. If it's your first time performing a LiDAR calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like. This dataset features: * Observations as an [MCAP](/metrical/configuration/data_formats.md) * Two LiDAR sensors * One [circle target](/metrical/targets/lidar_targets.md) [📁Download: LiDAR Example Dataset](https://drive.google.com/drive/folders/1BsbRxQF7jjBH8R29aWFSImMDOW5Tgxrn?usp=drive_link) ## The Manifest[​](#the-manifest "Direct link to The Manifest") lidar\_lidar\_manifest.toml ``` [project] name = "MetriCal Demo: LiDAR-LiDAR Manifest" version = "15.0.0" description = "Manifest for running MetriCal on a dataset with multiple LiDAR." workspace = "metrical-results" ## === VARIABLES === [project.variables.dataset] description = "Path to the input dataset containing calibration data." value = "lidar_lidar_va.mcap" [project.variables.object-space] description = "Path to the input object space JSON file." 
value = "lidar_lidar_va_objects.json" ## === STAGES === [stages.lidar-init] command = "init" dataset = "{{variables.dataset}}" reference-source = [] topic-to-model = [["/livox/lidar", "lidar"], ["/velodyne_points", "lidar"]] remap-reference-component = [] overwrite-strategy = "replace" uuid-strategy = "inherit-reference" initialized-plex = "{{auto}}" [stages.lidar-calibrate] command = "calibrate" dataset = "{{variables.dataset}}" input-plex = "{{lidar-init.initialized-plex}}" input-object-space = "{{variables.object-space}}" optimization-profile = "standard" lidar-motion-threshold = "strict" preserve-input-constraints = false disable-ore-inference = false overwrite-detections = false override-diagnostics = false render = true detections = "{{auto}}" results = "{{auto}}" ``` Note that our second stage is rendered. This flag will allow us to watch the detection phase of the calibration as it happens in real time. This can have a large impact on performance, but is invaluable for debugging data quality issues. Match Rerun Versions MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the `--render` flag. Please ensure that you've followed the [visualization configuration instructions](/metrical/configuration/visualization.md) before running this manifest. ### Running the Manifest[​](#running-the-manifest "Direct link to Running the Manifest") With a copy of the dataset downloaded and the manifest file created, you should be ready to roll: ``` metrical run lidar_lidar_manifest.toml ``` When you start it, it will display a visualization window like the following with two LiDAR point clouds in the same coordinate frame (but not registered to one another yet). Note that the LiDAR circle detections will show up as red points in the point cloud. ![A screenshot of the MetriCal detection visualization](/assets/images/lidar_lidar_visualization-ba494a8de737c0615f8dc328e6d5b077.png) While the calibration is running, take specific note of the target motion patterns, presence of still periods, and breadth of coverage. When it comes time to design a motion sequence for your own systems, try to apply any learnings you take from watching this capture. When the run finishes, you'll be left with three artifacts: * `initialized-plex.json`: Our initialized plex from the first stage. * `report.html`: a human-readable summary of the calibration run. Everything in the report is also logged to your console in realtime during the calibration. You can learn more about interpreting the report [here.](/metrical/results/report.md) * `results.mcap`: a file containing the final calibration and various other metrics. You can learn more about results [here](/metrical/results/output_file.md) and about manipulating your results using `shape` commands [here](/metrical/commands/calibration/shape/shape_overview.md). ## Data Capture Guidelines[​](#data-capture-guidelines "Direct link to Data Capture Guidelines") Many of the tips for LiDAR ↔ LiDAR data capture are similar to [Camera ↔ LiDAR capture](/metrical/calibration_guides/camera_lidar_cal.md). Below we've outlined best practices for capturing a dataset that will consistently produce a high-quality calibration between two LiDAR components. 
### Best Practices[​](#best-practices "Direct link to Best Practices") | DO | DON'T | | --- | --- | | ✅ Vary distance from sensors to target to capture a wider range of readings. | ❌ Stand at a single distance from the sensors. | | ✅ Ensure good point density on the target for all LiDAR sensors. | ❌ Only capture poses where the target does not have sufficient point density. | | ✅ Rotate the target on its planar axis to capture poses at different orientations. | ❌ Keep the board in a single orientation for every pose. | | ✅ Maximize convergence angles by angling the target up/down and left/right during poses. | ❌ Keep the target in a single orientation, minimizing convergence angles. | | ✅ Pause in between poses (1-2 seconds) to eliminate motion artifacts. | ❌ Move the target constantly or rapidly during data capture. | | ✅ Achieve convergence angles of 35° or more on each axis when possible. | | ### Maximize Variation Across All 6 Degrees Of Freedom[​](#maximize-variation-across-all-6-degrees-of-freedom "Direct link to Maximize Variation Across All 6 Degrees Of Freedom") When capturing the circular target with multiple LiDAR components, ensure that the target can be seen from a variety of poses by varying: * **Roll, pitch, and yaw rotations** of the target relative to the LiDAR sensors * **X, Y, and Z translations** between poses MetriCal performs LiDAR-based calibrations best when the target can be seen from a range of different depths. Try moving forward and away from the target in addition to moving laterally relative to the sensors. The field of view of a LiDAR component is often much greater than that of a camera, so be sure to capture data that fills this larger field of view. ## Troubleshooting[​](#troubleshooting "Direct link to Troubleshooting") If you encounter errors during calibration, please refer to our [Errors and Troubleshooting](/metrical/commands/command_errors.md) documentation. Remember that all measurements for your targets should be in meters, and you should ensure visibility of as much of the target as possible when collecting data. --- # Camera ↔ LNS Guide This guide walks you through the process of calibrating local navigation systems (LNS, aka odometry) using MetriCal. Local navigation systems are represented by odometry messages and provide precise positioning in indoor or GPS-denied environments. Rolling Shutter Cameras MetriCal currently does not perform well with LNS ↔ camera calibrations if the camera is a rolling shutter camera. We advise calibrating with a global shutter camera whenever possible. ## Example Dataset and Manifest[​](#example-dataset-and-manifest "Direct link to Example Dataset and Manifest") Thanks, NVIDIA! We'd like to give a special thanks to our friends at NVIDIA for letting us borrow a [Nova Carter](https://robotics.segway.com/nova-carter/) robot to capture the example dataset used in this guide. If you're using your own Nova Carter, you should be able to use recorded data from it directly with MetriCal. If it's your first time performing a local navigation system calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like.
This dataset features: * Observations as an [MCAP](/metrical/configuration/data_formats.md) * Four color cameras * One local navigation system (odometry stream) * Two [markerboard targets](/metrical/targets/camera_targets.md) [📁Download: LNS Example Dataset](https://drive.google.com/drive/folders/1yaR-q6yHdUzzaD60AO_LA3kqhRwdEqtx?usp=drive_link) ## The Manifest[​](#the-manifest "Direct link to The Manifest") local\_nav\_pacifica\_manifest.toml ``` [project] name = "MetriCal Demo: Local Navigation System Manifest" version = "15.0.0" description = "Manifest for running MetriCal with a local navigation system." workspace = "metrical-results" ## === VARIABLES === [project.variables.dataset] description = "Path to the input dataset containing calibration data." value = "local_nav_pacifica.mcap" [project.variables.object-space] description = "Path to the input object space JSON file." value = "local_nav_pacifica_objects.json" ## === STAGES === [stages.lns-init] command = "init" dataset = "{{variables.dataset}}" topic-to-model = [["*image*", "eucm"], ["*odom*", "lns"]] ... # ...more options... initialized-plex = "{{auto}}" [stages.lns-calibrate] command = "calibrate" dataset = "{{variables.dataset}}" input-plex = "{{lns-init.initialized-plex}}" input-object-space = "{{variables.object-space}}" camera-motion-threshold = "disabled" render = true ... # ...more options... detections = "{{auto}}" results = "{{auto}}" ``` Before running the manifest, let's take note of a couple of things: * This manifest disables the camera motion filter entirely. Local navigation systems rely on consistent motion to produce good results, so we don't want to filter out anything. * Our second stage is rendered. This flag will allow us to watch the detection phase of the calibration as it happens in real time. This can have a large impact on performance, but is invaluable for debugging data quality issues. Match Rerun Versions MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the `--render` flag. Please ensure that you've followed the [visualization configuration instructions](/metrical/configuration/visualization.md) before running this manifest. ### Running the Manifest[​](#running-the-manifest "Direct link to Running the Manifest") With a copy of the dataset downloaded and the manifest file created, you should be ready to roll: ``` metrical run local_nav_pacifica_manifest.toml ``` You'll see this output before the optimization step begins: ``` × Motion profile issues detected for LNS /chassis/odom (55c18322): │ - Limited velocity change (condition number > 100) │ - Limited angular velocity change (condition number > 100) │ - Insufficient variance in velocity measurements │ - Insufficient variance in angular velocity measurements │ - Motion deficient in a single axis in velocity │ - Motion deficient in a single axis in angular velocity help: The LNS motion profile analysis detected issues with the trajectory of this LNS. Please review the detected issues and consider re-collecting data with improved motion characteristics. If you would like to process the rest of this calibration, re-init the plex without this component and rerun the calibration. ``` ...which is true, actually! This dataset was captured with a [Nova Carter](https://robotics.segway.com/nova-carter/), which those with keen eyes will recognize as a vehicle that drives rather than flies. This means that its motion is mostly planar, which is not ideal for LNS calibration.
![Nova Carter degrees of freedom](/assets/images/carter-0fa330d265b7246da93f7ea77598c8eb.png) MetriCal will still proceed with the calibration, but it's worth noting that the results may not be as good as they could be with a more exciting motion profile. When the run finishes, you'll be left with three artifacts: * `initialized-plex.json`: Our initialized plex from the first stage. * `report.html`: a human-readable summary of the calibration run. Everything in the report is also logged to your console in realtime during the calibration. You can learn more about interpreting the report [here.](/metrical/results/report.md) * `results.mcap`: a file containing the final calibration and various other metrics. You can learn more about results [here](/metrical/results/output_file.md) and about manipulating your results using `shape` commands [here](/metrical/commands/calibration/shape/shape_overview.md). And that's it! Local navigation system calibration is a bit simpler than other sensor calibrations, so you should be able to run through this example quickly. Hopefully this trial run has given you a better understanding of how to capture data for your own local navigation system calibration. ## Data Capture Guidelines[​](#data-capture-guidelines "Direct link to Data Capture Guidelines") ### Maximize View of Object Spaces for Duration of Capture[​](#maximize-view-of-object-spaces-for-duration-of-capture "Direct link to Maximize View of Object Spaces for Duration of Capture") As with IMU calibration, LNS calibration is done by comparing the world pose (or world extrinsic) of the camera between frames to the interpolated motion of the local navigation system. Because of how this problem is posed, the best way to produce consistent, precise LNS ↔ camera calibrations is to maximize the visibility of one or more targets in the object space from one of the cameras being calibrated alongside the LNS. Put differently: avoid capturing sections of data where the LNS is recording but no object space or target can be seen from any camera. Capturing such sections can lead to misleading bias estimates. ### Excite All Axes During Capture[​](#excite-all-axes-during-capture "Direct link to Excite All Axes During Capture") LNS calibration is no different from any other modality: it is an entirely data-driven process. In particular, the underlying data needs to contain observable translational and rotational motion in order for MetriCal to understand the motion path that the LNS has followed. This is what is meant by "exciting" an LNS: accelerations and rotational velocities must be observable in the data (different enough from the underlying measurement noise) so that they can be estimated separately from e.g. the biases. This means that when capturing data to calibrate between an LNS and one or more cameras, it is important to move the LNS in all 6 degrees of freedom. This motion can be repetitive so long as a sufficient magnitude of motion has been achieved. success We suggest alternating between periods of "excitement" or motion with the LNS and holding still so that the camera(s) can accurately and precisely measure the given object space. If you find yourself still having trouble getting a sufficient number of observations to produce reliable calibrations, we suggest bumping up the threshold for our motion filter heuristic when calling `metrical calibrate`. This is controlled by the `--camera-motion-threshold` flag.
A value of 3.0 through 5.0 can sometimes improve the quality of the calibration a significant amount. ### Reduce Motion Blur in Camera Images[​](#reduce-motion-blur-in-camera-images "Direct link to Reduce Motion Blur in Camera Images") This advice holds for both multi-camera and LNS ↔ camera calibrations. It is often advisable to reduce the effects of motion in the images to produce more crisp, detailed images to calibrate against. Some ways to do this are to: 1. Always use a global shutter camera 2. Reduce the overall exposure time of the camera 3. Avoid over-exciting LNS motion, and don't be scared to slow down a little if you find you can't detect the object space much if at all. ## Troubleshooting[​](#troubleshooting "Direct link to Troubleshooting") If you encounter errors during calibration, please refer to our [Errors and Troubleshooting](/metrical/commands/command_errors.md) documentation. Remember that all measurements for your targets should be in meters, and you should ensure visibility of as much of the target as possible when collecting data. --- # Multi-Camera Calibration Guide MetriCal's multi-camera calibration is a joint process, which includes calibrating both intrinsics and extrinsics simultaneously. This guide provides specific tips for calibrating multiple cameras together. ## Common Multi-Camera Workflows[​](#common-multi-camera-workflows "Direct link to Common Multi-Camera Workflows") MetriCal offers two approaches to multi-camera calibration: combined (all-at-once) and staged. The best approach depends on your specific hardware setup and calibration requirements. ### Combined Calibration[​](#combined-calibration "Direct link to Combined Calibration") If all your cameras can simultaneously view calibration targets, you can run a direct multi-camera calibration. This is the approach that we take in the [example dataset below](#example-dataset-and-manifest). This approach works best when: * You have a rig where all cameras can see the same target(s) at the same time * You have good control over the target positioning ### Staged Calibration[​](#staged-calibration "Direct link to Staged Calibration") For large or complex camera rigs where: * Cameras are mounted far apart * Some cameras are in hard-to-reach positions * It's difficult to have all cameras view targets simultaneously A staged approach is recommended: ``` # First stages: Camera-only calibration [stages.first-init] command = "init" dataset = "{{variables.camera-one-dataset}}" ... [stages.first-calibration] command = "calibrate" dataset = "{{variables.camera-one-dataset}}" input-plex = "{{first-init.initialized-plex}}" ... # Second stages: Extrinsics calibration with # referenced camera intrinsics from first stages [stages.second-init] command = "init" dataset = "{{variables.camera-extrinsics-dataset}}" # >>> Use camera cal from first stages <<< reference-source = ["{{first-calibration.results}}"] ... [stages.second-calibration] command = "calibrate" dataset = "{{variables.camera-extrinsics-dataset}}" input-plex = "{{second-init.initialized-plex}}" ... # This will now have all intrinsics + extrinsics results = "{{auto}}" ``` This staged approach allows you to capture optimal data for each camera's intrinsics, getting close to each camera to fill its FOV, before moving on to extrinsics calibration. Using a staged approach, you can avoid the logistical challenges of trying to get optimal data for all cameras simultaneously. 
For details on calibrating individual cameras, see the [Single Camera Calibration guide](/metrical/calibration_guides/single_camera_cal.md). ## Example Dataset and Manifest[​](#example-dataset-and-manifest "Direct link to Example Dataset and Manifest") We've captured an example of a good multi-camera calibration dataset that you can use to test out MetriCal. If it's your first time performing a multi-camera calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like. This dataset features: * Observations as a [folder of folders](/metrical/configuration/data_formats.md) * Two infrared cameras * One color camera * One [markerboard target](/metrical/targets/camera_targets.md) [📁Download: Camera Example Dataset](https://drive.google.com/drive/folders/1xQDVBTW41xlnZfREliCuvEXFchaXwTr1?usp=drive_link) ### The Manifest[​](#the-manifest "Direct link to The Manifest") camera\_only\_cortland\_manifest.toml ``` [project] name = "MetriCal Demo: Multi-Camera Manifest" version = "15.0.0" description = "Manifest for running MetriCal on a camera dataset." workspace = "metrical-results" ## === VARIABLES === [project.variables.dataset] description = "Path to the input dataset containing camera and lidar data." value = "camera_only_cortland_data/" [project.variables.object-space] description = "Path to the input object space JSON file." value = "camera_only_cortland_objects.json" ## === STAGES === [stages.cam-init] command = "init" dataset = "{{variables.dataset}}" topic-to-model = [ ["09*", "no-distortion"], ["f7*", "no-distortion"], ["ff*", "opencv-radtan"] ] overwrite-strategy = "replace" ... # ...more options... initialized-plex = "{{auto}}" [stages.cam-calibrate] command = "calibrate" dataset = "{{variables.dataset}}" input-plex = "{{cam-init.initialized-plex}}" input-object-space = "{{variables.object-space}}" camera-motion-threshold = "disabled" overwrite-detections = true render = true ... # ...more options... detections = "{{auto}}" results = "{{auto}}" ``` Before running the manifest, let's take note of a couple things: * Our overwrite strategy for our `cam-init` stage is set to `replace`, and we're also overwriting detections in the `cam-calibrate` stage. If you go through the [multi-camera calibration guide](/metrical/calibration_guides/multi_camera_cal.md), which uses the same dataset, you'll see that `preserve`-ing your initialized plex will also keep prior detections. To be safe, we'll just do everything again. * Our second stage is rendered. This flag will allow us to watch the detection phase of the calibration as it happens in real time. This can have a large impact on performance, but is invaluable for debugging data quality issues. Match Rerun Versions MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the `--render` flag. Please ensure that you've followed the [visualization configuration instructions](/metrical/configuration/visualization.md) before running this manifest. 
### Running the Manifest[​](#running-the-manifest "Direct link to Running the Manifest") With a copy of the dataset downloaded and the manifest file created, you should be ready to roll: ``` metrical run camera_only_cortland_manifest.toml ``` ![A screenshot of the MetriCal detection visualization](/assets/images/multicam_visualization-aa3e330f19d5f96b23609f9aff7fadae.png) While the calibration is running, take specific note of the target motion patterns, presence of still periods, and breadth of camera coverage. When it comes time to design a motion sequence for your own systems, try to apply any learnings you take from watching this capture. When the run finishes, you'll be left with three artifacts: * `initialized-plex.json`: Our initialized plex from the first stage. * `report.html`: a human-readable summary of the calibration run. Everything in the report is also logged to your console in realtime during the calibration. You can learn more about interpreting the report [here.](/metrical/results/report.md) * `results.mcap`: a file containing the final calibration and various other metrics. You can learn more about results [here](/metrical/results/output_file.md) and about manipulating your results using `shape` commands [here](/metrical/commands/calibration/shape/shape_overview.md). ## Data Capture Guidelines[​](#data-capture-guidelines "Direct link to Data Capture Guidelines") ### Best Practices[​](#best-practices "Direct link to Best Practices") | DO | DON'T | | ---------------------------------------------------------------------- | ------------------------------------------------------------------------ | | ✅ Ensure target is visible to multiple cameras simultaneously. | ❌ Calibrate each camera independently without overlap. | | ✅ Maximize overlap between camera views. | ❌ Have overlap only at the peripheries of wide-angle lenses. | | ✅ Keep targets in focus in all cameras. | ❌ Capture blurry or out-of-focus images in any camera. | | ✅ Capture the target across the entire field of view for each camera. | ❌ Only place the target in a small part of each camera's field of view. | | ✅ Rotate the target 90° for some captures. | ❌ Keep the target in only one orientation. | | ✅ Capture the target from various angles to maximize convergence. | ❌ Only capture the target from similar angles. | | ✅ Pause between poses to avoid motion blur. | ❌ Move the target continuously during capture. | ### Maximize Overlap Between Images[​](#maximize-overlap-between-images "Direct link to Maximize Overlap Between Images") While it's important to fill the full field-of-view of each individual camera to determine distortions, for multi-camera calibration, cameras must **jointly observe the same object space** to determine the relative extrinsics between them. Once you've observed across the entire field-of-view of each camera individually, focus on capturing the object space in multiple cameras from the same position. The location of this overlap is also important. For example, when working with very-wide field-of-view lenses, having overlap only at the peripheries can sometimes produce odd results, because the overlap is largely contained in high distortion areas of the image. Aim for overlap in varying regions of the cameras' fields of view. ### Using Multiple Targets[​](#using-multiple-targets "Direct link to Using Multiple Targets") When working with multiple cameras, using multiple calibration targets can be particularly beneficial. This provides: 1. 
Better depth variation, which helps reduce projective compensation 2. More opportunities for overlap between cameras with different fields of view or orientations 3. Improved extrinsics estimation between cameras Read more about [using multiple targets here](/metrical/targets/multiple_targets.md). ### Basic Camera Calibration Principles[​](#basic-camera-calibration-principles "Direct link to Basic Camera Calibration Principles") All of the principles that apply to single camera calibration also apply to each camera in a multi-camera setup: #### Keep Targets in Focus[​](#keep-targets-in-focus "Direct link to Keep Targets in Focus") Ensure all cameras in your system are focused properly. A lens focused at infinity is recommended for calibration. Knowing the depth of field for each camera helps ensure you never get blurry images in your data. #### Consider Target Orientations[​](#consider-target-orientations "Direct link to Consider Target Orientations") Collect data where the target is captured at both 0° and 90° orientations to de-correlate errors in x and y measurements. This applies to all cameras in your multi-camera setup. #### Fill the Full Field of View[​](#fill-the-full-field-of-view "Direct link to Fill the Full Field of View") For each camera in your setup, ensure you capture the target across the entire field of view, especially near the edges where distortion is greatest. #### Maximize Convergence Angles[​](#maximize-convergence-angles "Direct link to Maximize Convergence Angles") The convergence angle of each camera's pose relative to the object space is important. Aim for convergence angles of 70° or greater when possible. ## Troubleshooting[​](#troubleshooting "Direct link to Troubleshooting") If you encounter errors during calibration, please refer to our [Errors and Troubleshooting](/metrical/commands/command_errors.md) documentation. Remember that all measurements for your targets should be in meters, and you should ensure visibility of as much of the target as possible when collecting data. --- # Narrow Field-of-View Camera Calibration Narrow field-of-view (NFOV) cameras are naturally challenging to calibrate, due to an inability to observe the focal length parameter of the camera when using typical planar targets. ## Prerequisites[​](#prerequisites "Direct link to Prerequisites") Before getting started, you should familiarize yourself with the camera calibration process by reviewing the [Single Camera Calibration guide](/metrical/calibration_guides/single_camera_cal.md). Many of the same guidelines for data capture, best practices, and procedures apply here as well. Tangram Vision Blog: NFOV Calibration Made Easy(er) If you're looking for the "why", and not the "how" (this guide), we've got a detailed blog post which dives into the background and theory of narrow field-of-view camera calibration: [Narrow FOV Calibration Made Easy(er)](https://www.tangramvision.com/blog/nfov-calibration-made-easier) ## Physical Constraints[​](#physical-constraints "Direct link to Physical Constraints") Narrow field-of-view means a longer focal length, and longer focal lengths require more depth variation in each target to calibrate correctly. However, calibrating *very* long focal lengths, e.g. NFOV cameras with depths of field reaching 25 m, would require a massive board setup to achieve sufficient depth variation. Think 8 m² boards placed 25 m away from the camera!
MetriCal gets around this by allowing you to combine multiple targets at different depths into a single object space; we call this *consolidation*. By effectively surveying multiple targets at different depths, we can achieve sufficient depth variation to constrain the focal length parameter without requiring an impractically large target. There's a [whole command](/metrical/commands/calibration/consolidate.md) dedicated to merging calibrated object spaces like this, and we'll be leveraging it here. Given the nature of NFOV camera calibration, the process has the following additional physical requirements: * Multiple calibration targets (at least two). These need to be compatible with simultaneous use, i.e. they should not have overlapping fiducial IDs. See [Multiple Targets](/metrical/targets/multiple_targets.md) for more details. * An additional camera that can be more easily calibrated, e.g., a wide field-of-view camera Here's a snapshot from our [synthetic example dataset](#example-dataset-and-manifest) to give you an idea of a typical target setup for NFOV calibration: ![Two targets at different depths](/assets/images/narrow_multiboard-efb31acd247e60aef38cc9c588d547fe.png) ## Procedure Overview[​](#procedure-overview "Direct link to Procedure Overview") The general procedure is broken into two phases: a survey phase and an NFOV calibration phase. Note that *this data can be gathered from the same run* if using a wide FOV camera + narrow FOV camera stereo pair. This is what we do in our [example dataset](#example-dataset-and-manifest). ### Phase 1: Survey[​](#phase-1-survey "Direct link to Phase 1: Survey") * A target field with targets at varying depths is constructed. * Data is captured with a more easily calibrated camera (i.e. a wider FOV one). * The camera is calibrated and the target field is surveyed with MetriCal either in one pass or separately. This is done with the [Calibrate command](/metrical/commands/calibration/calibrate.md). * The target field survey and calibration are used to create a consolidated object space configuration for the target field using the [Consolidate Object Spaces command](/metrical/commands/calibration/consolidate.md). ![Surveying your object spaces](/assets/images/consolidate-d4534ae0ce33dbd762cff6f21832dcc2.png) ### Phase 2: NFOV Calibration[​](#phase-2-nfov-calibration "Direct link to Phase 2: NFOV Calibration") * Data is captured with the narrow field-of-view camera. The camera should observe the target field as it was surveyed. Generally the same sorts of data capture requirements apply as always. The image space should be well-covered. * The narrow field-of-view dataset and consolidated object space configuration are used to calibrate the narrow field-of-view camera. ![Completing NFOV calibration](/assets/images/nfov_cal-baa53deb10c95255786d1982f6f14601.png) ### Target Visibility Requirements[​](#target-visibility-requirements "Direct link to Target Visibility Requirements") *Target Visibility On Survey*: The survey needs to observe multiple targets within a single image to infer the relationship between them. This doesn't necessarily mean that all targets need to be visible at once; however, there must be some overlap between the targets in the surveying camera's field of view such that the relationship between all targets can be inferred. 
*Target Visibility on Calibration*: Similarly, the narrow field-of-view camera data capture process should follow [typical calibration best practices](/metrical/calibration_guides/single_camera_cal.md) while also observing multiple targets in each frames. Observing a single target per frame *is not sufficient* to constrain the focal length parameter. Also it is very important that the target field is observed in the same physical configuration in which it was surveyed. If the targets move relative to each other between the survey and narrow field-of-view data capture, the calibration will be incorrect. ## Example Dataset and Manifest[​](#example-dataset-and-manifest "Direct link to Example Dataset and Manifest") This dataset features: * Observations as a [folder of folders](/metrical/configuration/data_formats.md) * One wide field-of-view camera, which is designated as the surveying camera * One narrow field-of-view camera with a focal length of approximately 3248px (very narrow!) * Two [aprilgrid targets](/metrical/targets/camera_targets.md) positioned 2m away from one another in depth We've also added a small amount of radial distortion to the cameras to keep things interesting. [📁Download: Narrow FOV Camera Example Dataset](https://drive.google.com/drive/folders/1SrcqTwaXuUc3l7D3px12y3tmvl-gQPHs?usp=drive_link) ## The Manifest[​](#the-manifest "Direct link to The Manifest") This manifest is larger than most, since it covers both the surveying and narrow field-of-view calibration phases. We'll break it down step-by-step in the sections that follow. narrow\_fov\_manifest.toml ``` [project] name = "MetriCal Demo: Narrow FOV Manifest" version = "15.0.0" description = "Manifest for running MetriCal for a camera with a narrow field of view." workspace = "metrical-results" ## === VARIABLES === [project.variables.dataset] description = "Path to the input dataset containing calibration data." value = "narrow_fov_observations" [project.variables.object-space] description = "Path to the input object space JSON file." value = "narrow_fov_objects.json" ## === STAGES === [stages.survey-camera-init] command = "init" dataset = "{{variables.dataset}}" topic-to-model = [["survey_camera", "opencv-radtan"]] ... # ...more options... initialized-plex = "{{auto}}" [stages.nfov-camera-init] command = "init" dataset = "{{variables.dataset}}" topic-to-model = [["nfov_camera", "opencv-radtan"]] ... # ...more options... initialized-plex = "{{auto}}" [stages.survey-camera-calibrate] command = "calibrate" dataset = "{{variables.dataset}}" input-plex = "{{survey-camera-init.initialized-plex}}" input-object-space = "{{variables.object-space}}" camera-motion-threshold = "disabled" ... # ...more options... detections = "{{auto}}" results = "{{auto}}" [stages.consolidate] command = "consolidate" input-object-space = "{{survey-camera-calibrate.results}}" consolidated-object-space = "{{auto}}" overwrite = true [stages.nfov-camera-calibrate] command = "calibrate" dataset = "{{variables.dataset}}" input-plex = "{{nfov-camera-init.initialized-plex}}" input-object-space = "{{consolidate.consolidated-object-space}}" camera-motion-threshold = "disabled" ... # ...more options... 
detections = "{{auto}}" results = "{{auto}}" ``` ### Running the Manifest[​](#running-the-manifest "Direct link to Running the Manifest") Run your manifest as usual with MetriCal: ``` metrical run narrow_fov_manifest.toml ``` ### Phase 1: Survey[​](#phase-1-survey-1 "Direct link to Phase 1: Survey") Our first stages, `survey-camera-init` and `nfov-camera-init`, will create individual plexes for each of the cameras. Normally, we suggest creating a plex for all your sensors at once, but we only want to use the surveying camera (our wide field-of-view camera) for the surveying phase of this procedure. Thus, we will deliberately leave the narrow field-of-view camera out of the first calibration phase. Survey, then Whatever Once your surveying camera is calibrated and the target field is surveyed, you can use the consolidated object space to calibrate any type of camera, not just narrow field-of-view. However, narrow field-of-view cameras are the only type that really *require* the surveyed space. Next, the `survey-camera-calibrate` stage does two things (as all calibrate stages do): it calibrates the wide field-of-view camera and surveys the target field, aka optimizing the plex and object space. However, an optimized object space is still made up of separate objects; we run the `consolidate` stage to merge it into one consolidated object space configuration, a single feature that our narrow field-of-view camera can use to calibrate against. If you requested visualization during the `consolidate` command, Rerun will pop up and you should see something like this: ![consolidated object space](/assets/images/consolidated_object_space-1c135cd4cc9a50a5c0d575585dc8e640.png) Targets will be grouped together based on the mutual target observations in the data. As previously noted, consolidation is driven by targets being observed together in the same images. That said, MetriCal will stitch together target groups even if they were not all directly observed together, as long as there is a chain of mutual observations connecting them. So if we see targets A and B together in some images, and targets B and C together in other images, MetriCal can infer the relationship between A and C, even if they were never observed together. If there are no mutual observations of a target, it will be left unconsolidated. It's also possible to have multiple disjoint groups of consolidated targets if there are no mutual observations connecting them. It's useful to inspect Rerun or the consolidated object space JSON to ensure that the intended consolidation takes place. If something looks wrong, you may need to recapture survey data with better target overlap. ### Phase 2: NFOV Calibration[​](#phase-2-nfov-calibration-1 "Direct link to Phase 2: NFOV Calibration") From here, calibration mostly proceeds as normal. The `nfov-camera-calibrate` stage uses the narrow field-of-view camera data and the consolidated object space configuration to calibrate the camera. Sure enough, we get good results: ![Narrow field-of-view calibration results](/assets/images/narrow_focal-16845d70354e5ac39204f47f46293c11.png) Such narrow! So fov! Wow. For reference, the simulated narrow field-of-view camera had a focal length of 3248px. This means that, on our synthetic dataset, MetriCal produced a focal length error of around *0.097%*. Not bad!
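To put that percentage in absolute terms, a quick back-of-the-envelope check (not a value pulled from the report itself) gives:

$$
0.097\% \times 3248\ \text{px} \approx 3\ \text{px}
$$

In other words, the recovered focal length landed within a few pixels of the simulated ground truth.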
## Surveying: It's Worth It[​](#surveying-its-worth-it "Direct link to Surveying: It's Worth It") This procedure may seem a bit involved, but we believe this is one of the most practical ways to calibrate narrow field-of-view cameras with high accuracy. By leveraging a more easily calibrated camera to survey the target field, we can create a rich object space that allows us to accurately calibrate cameras that would otherwise be very challenging to handle. Don't think it makes a difference? Try for yourself: modify the manifest to skip the surveying phase and use our regular ol' object space for the narrow field-of-view calibration. Let us know how it goes! --- # Single Camera Calibration Guide MetriCal's camera calibration is a joint process, which includes calibrating intrinsics and extrinsics at the same time. This guide provides tips and best practices specifically for single camera calibration. ## Common Single Camera Workflows[​](#common-single-camera-workflows "Direct link to Common Single Camera Workflows") ### Seeding Larger Calibrations[​](#seeding-larger-calibrations "Direct link to Seeding Larger Calibrations") Single camera intrinsics calibration is often the first step in a multi-stage calibration process. When dealing with complex rigs where cameras are mounted in hard-to-reach positions (e.g., several feet off the ground), it can be beneficial to: 1. Calibrate each camera's intrinsics individually using optimal data for that camera 2. Use these individual calibrations to seed a larger multi-camera or sensor fusion calibration This approach allows you to get up close to each camera individually to fully cover its field of view, which might not be possible when capturing data for the entire rig simultaneously. The resulting calibration file from this process can be used as input to subsequent calibrations using the [`--reference-source`](/metrical/commands/calibration/init.md#-p---reference-source-reference_source) argument in the Init command: ``` [stages.first-calibration] command = "calibrate" dataset = "{{variables.single-cam-dataset}}" input-plex = "{{first-init.initialized-plex}}" ... [stages.second-init] command = "init" dataset = "{{variables.second-dataset}}" # >>> Use camera cal from first stages <<< reference-source = ["{{first-calibration.results}}"] ``` See the [Multi-Camera Calibration guide](/metrical/calibration_guides/multi_camera_cal.md) for details on how to combine multiple single-camera calibrations. ## Example Dataset and Manifest[​](#example-dataset-and-manifest "Direct link to Example Dataset and Manifest") We've captured an example of a good single camera calibration dataset that you can use to test out MetriCal. If it's your first time performing a single camera calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like. This dataset features: * Observations as a [folder of folders](/metrical/configuration/data_formats.md) * Two infrared cameras * One color camera * One [markerboard target](/metrical/targets/camera_targets.md) [📁Download: Camera Example Dataset](https://drive.google.com/drive/folders/1xQDVBTW41xlnZfREliCuvEXFchaXwTr1?usp=drive_link) ## The Manifest[​](#the-manifest "Direct link to The Manifest") camera\_only\_cortland\_manifest\_single\_cam.toml ``` [project] name = "MetriCal Demo: Single Camera Manifest" version = "15.0.0" description = "Manifest for running MetriCal on a camera dataset." 
workspace = "metrical-results" ## === VARIABLES === [project.variables.dataset] description = "Path to the input dataset containing camera and lidar data." value = "camera_only_cortland_data/" [project.variables.object-space] description = "Path to the input object space JSON file." value = "camera_only_cortland_objects.json" ## === STAGES === [stages.cam-init] command = "init" dataset = "{{variables.dataset}}" topic-to-model = [["09*", "no-distortion"]] overwrite-strategy = "replace" ... # ...more options... initialized-plex = "{{auto}}" [stages.cam-calibrate] command = "calibrate" dataset = "{{variables.dataset}}" input-plex = "{{cam-init.initialized-plex}}" input-object-space = "{{variables.object-space}}" camera-motion-threshold = "disabled" overwrite-detections = true render = true ... # ...more options... detections = "{{auto}}" results = "{{auto}}" ``` Before running the manifest, let's take note of a couple things: * Our overwrite strategy for our `cam-init` stage is set to `replace`, and we're also overwriting detections in the `cam-calibrate` stage. If you go through the [multi-camera calibration guide](/metrical/calibration_guides/multi_camera_cal.md), which uses the same dataset, you'll see that `preserve`-ing your initialized plex will also keep prior detections. This means we'll only get one camera worth of detections in our multi-camera calibration! * Our second stage is rendered. This flag will allow us to watch the detection phase of the calibration as it happens in real time. This can have a large impact on performance, but is invaluable for debugging data quality issues. Match Rerun Versions MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the `--render` flag. Please ensure that you've followed the [visualization configuration instructions](/metrical/configuration/visualization.md) before running this manifest. ### Running the Manifest[​](#running-the-manifest "Direct link to Running the Manifest") With a copy of the dataset downloaded and the manifest file created, you should be ready to roll: ``` metrical run camera_only_cortland_manifest_single_cam.toml ``` ![A screenshot of the MetriCal detection visualization](/assets/images/single_cam_visualization-c2c7649ec7c80af19c9c22c9400bc153.png) While the calibration is running, take specific note of the target motion patterns, presence of still periods, and breadth of camera coverage. When it comes time to design a motion sequence for your own systems, try to apply any learnings you take from watching this capture. When the run finishes, you'll be left with three artifacts: * `initialized-plex.json`: Our initialized plex from the first stage. * `report.html`: a human-readable summary of the calibration run. Everything in the report is also logged to your console in realtime during the calibration. You can learn more about interpreting the report [here.](/metrical/results/report.md) * `results.mcap`: a file containing the final calibration and various other metrics. You can learn more about results [here](/metrical/results/output_file.md) and about manipulating your results using `shape` commands [here](/metrical/commands/calibration/shape/shape_overview.md). 
## Data Capture Guidelines[​](#data-capture-guidelines "Direct link to Data Capture Guidelines") ### Best Practices[​](#best-practices "Direct link to Best Practices") | DO | DON'T | | --- | --- | | ✅ Keep targets in focus - use a lens focused at infinity when possible. | ❌ Capture blurry or out-of-focus images. | | ✅ Capture the target across the entire field of view, especially near the edges. | ❌ Only place the target in a small part of the field of view. | | ✅ Rotate the target 90° for some captures (or rotate the camera if target is fixed). | ❌ Keep the target in only one orientation. | | ✅ Capture the target from various angles to maximize convergence (aim for 70° or greater). | ❌ Only capture the target from similar angles. | | ✅ Make the target appear as large as possible in the image. | ❌ Keep the target too far away from the camera. | | ✅ Pause between poses to avoid motion blur. | ❌ Move the target or camera continuously during capture. | ### Keep Targets in Focus[​](#keep-targets-in-focus "Direct link to Keep Targets in Focus") This tip is an absolute must when capturing data with cameras. Data that is captured out-of-focus breaks underlying assumptions that are made about the relationship between the image projection and object space; this, in turn, breaks the projection model entirely! A lens focused at infinity captures distant objects without defocusing, so this is the recommended setting for calibration. Care should be taken, therefore, not to get a target so *close* to the lens that it blurs in the image. This near-to-far range in which objects stay in focus is called a camera's [*depth of field*](//en.wikipedia.org/wiki/Depth_of_field). Knowing your depth of field helps ensure you never get a blurry image in your data. It should be noted that a lens with a shorter focal length, i.e. wide field of view, tends to stay in focus over larger depths of field. ### Consider Your Target Orientations[​](#consider-your-target-orientations "Direct link to Consider Your Target Orientations") One of the largest sources of projective compensation comes from x and y correlations in the image space observations. These correlation effects are especially noticeable when there is little to no depth variation in the target field. It is almost always helpful to collect data where the target field is captured at both 0° and 90° orientations: ![Rotating a target by 90°](/assets/images/rotate_target-fd1bf73ca42b0dfe9e355481a276c251.png) When the object space is captured at 0°: * x image measurements directly measure X in object space * y image measurements directly measure Y in object space When the object space is captured at 90°: * x image measurements directly measure Y in object space * y image measurements directly measure X in object space This process de-correlates errors in x and y, because small errors in the x and y image measurements are statistically independent. There is no need to collect more data beyond 0° and 90° rotations; the two orientations alone are enough. Helping You Help Yourself Note that the same trick can be applied if the *target* is static and the camera is rotated relative to the target instead. ### Consider the Full Field Of View[​](#consider-the-full-field-of-view "Direct link to Consider the Full Field Of View") **The bigger the target is in the image, the better**.
A common data capture mistake is to only place the target in a small part of the field of view of the camera. This makes it extremely difficult to model radial distortions, especially if the majority of the data is concentrated in the center of the image. To mitigate this, object points should be observed across as much of the camera's field of view as possible. ![Good capture vs. Bad capture](/assets/images/good_capture-458fcbf2012259c71b5ab74656521943.png) It is especially important to get image observations with targets near the periphery of the image, because this is where distortion is greatest, and where it needs to be characterized best. As a rule of thumb, aim for a ratio of about 1:1 between the target width ($X_c$) and the distance to the target ($Z_c$). For a standard 10×7 checkerboard, this would mean: * 10 squares in the X-direction, each of length 0.10m * 7 squares in the Y-direction of length 0.10m * Held 1m away from the camera during data collection This gives a ratio of 1:1 in X and 7:10 in Y. ![Good target ratio](/assets/images/target_ratio-6327e69a45dd30f0c7322af0f3953a9c.png) In general, increasing the size of the target field is preferred to moving too close to the camera (see "Keep Targets In Focus" above). However, both can be useful in practice. ![Adjusting your target to get good coverage](/assets/images/target_size-fefaf301152a0d94acb9ff325274a25d.png) ### Maximize the Convergence Angle[​](#maximize-the-convergence-angle "Direct link to Maximize the Convergence Angle") The convergence angle of the camera's pose relative to the object space is a major factor in the determination of our focal length $f$, among other parameters. The more the angle changes in a single dataset, the better; in most scenarios, reaching a convergence angle of 70° or greater is recommended. ![top-down angles](/assets/images/top_down_angle-1ba9559f10bf6e5bfa75573839e396c8.png) ![side angles](/assets/images/side_angle-421cc2575a9bf5cb9f07a2ae363fc663.png) It is worth noting that other data qualities shouldn't be sacrificed for better angles. It is still important for the image to be in focus and for the targets to be observable. ## Advanced Considerations[​](#advanced-considerations "Direct link to Advanced Considerations") ### The Importance of Depth Variation[​](#the-importance-of-depth-variation "Direct link to The Importance of Depth Variation") note This point is more of a consideration than a requirement. At the very least, it should serve to provide you with more intuition about the calibration data capture process. Using a single planar target provides very little variation in object space depth Z. Restricting object space to a single plane introduces projective compensation in all sorts of ways: * $f$ to all object space Z coordinates * $p_1$ and $p_2$ to both $f$ and extrinsic rotations about X and Y (for Brown-Conrady) * $f$ to $k_1$ through $k_4$ (for Kannala-Brandt) A **non-planar target**, or a combination of targets using multiple object spaces, helps to mitigate this effect by adding *depth variation* in Z. In general, more depth variation is better. For those with only a single planar calibration target: know that MetriCal can *still* give great calibration results as long as the other data capture guidelines are followed. ## Troubleshooting[​](#troubleshooting "Direct link to Troubleshooting") If you encounter errors during calibration, please refer to our [Errors and Troubleshooting](/metrical/commands/command_errors.md) documentation.
Remember that all measurements for your targets should be in meters, and you should ensure visibility of as much of the target as possible when collecting data. --- # Camera Models Below are all supported camera intrinsics models in MetriCal. If there is a model that you use that is not listed here, just [contact us](/metrical/support_and_admin/contact.md)! We're always looking to expand our support. ## Common Variables and Definitions[​](#common-variables-and-definitions "Direct link to Common Variables and Definitions") | Variables | Description | | --- | --- | | $x_c$, $y_c$ | Pixel coordinates in the image plane, with origin at the principal point | | $X$, $Y$, $Z$ | Feature coordinates in the world, in 3D Euclidean Space | | $\hat{X}$, $\hat{Y}$, $\hat{Z}$ | Corrected camera ray, in homogeneous coordinates centered on the camera origin | | $\hat{X}_{dist}$, $\hat{Y}_{dist}$, $\hat{Z}_{dist}$ | Distorted camera ray, in homogeneous coordinates centered on the camera origin | **Modeling** (3D → 2D) refers to projecting a 3D point in the world to a 2D point in the image, given the intrinsics provided. In other words, it *models* the effect of distortion on a 3D point. This is also known as "forward projection". **Correcting** (2D → 3D) refers to the process of finding the camera ray that is created when intrinsics are applied to a 2D point in the image. When "undistorting" a pixel, this can be thought of as finding the corrected ray's point of intersection with the image plane. In other words, it *corrects* for the effect of distortion. This is also known as "inverse projection". **Unified** refers to a model that can be used to both model and correct for distortion. ## Camera Model Descriptions[​](#camera-model-descriptions "Direct link to Camera Model Descriptions") * No Distortion * OpenCV RadTan * OpenCV Fisheye * OpenCV Rational * Pinhole with Brown-Conrady * Pinhole with Kannala-Brandt * EUCM * Double Sphere * Omnidirectional * Power Law ### No Distortion[​](#no-distortion "Direct link to No Distortion") MetriCal keyword: `no-distortion` This model is a classic pinhole projection with no distortion or affine effects. This model is most applicable when you're already correcting your images with a rectification process, or when you're using a camera with a very low distortion profile. | Parameter | Description | | --- | --- | | $f$ | focal length (px) | | $c_x$ | principal point in x (px) | | $c_y$ | principal point in y (px) | De facto, nearly any camera that is already corrected for distortion uses this model. All models on this page are either pinhole models by design, or degrade to a pinhole model when no distortion is present.

$$
\begin{aligned}
x_c &= f\frac{X}{Z} + c_x \\
y_c &= f\frac{Y}{Z} + c_y
\end{aligned}
$$

### OpenCV RadTan[​](#opencv-radtan "Direct link to OpenCV RadTan") Original Reference OpenCV.org. Camera Calibration and 3D Reconstruction documentation. OpenCV 4.10-dev. MetriCal keyword: `opencv-radtan` Type: **Modeling** This is based on OpenCV's default distortion model, which is a modified Brown-Conrady model. If you've ever used OpenCV, you've most certainly used this.
| Parameter | Description | | --- | --- | | $f$ | focal length (px) | | $c_x$ | principal point in x (px) | | $c_y$ | principal point in y (px) | | $k_1$ | first radial distortion term | | $k_2$ | second radial distortion term | | $k_3$ | third radial distortion term | | $p_1$ | first tangential distortion term | | $p_2$ | second tangential distortion term | ### Common Use Cases[​](#common-use-cases "Direct link to Common Use Cases") OpenCV's adoption of this model has made it the de facto starting point for most calibration tasks. However, this does *not* mean it's multi-purpose. OpenCV RadTan is best suited for cameras with a field of view of 90° or less. A good number of sensor packages use this model, including: | Model | Cameras With Model | | --- | --- | | Intel RealSense D435 | All cameras | | Intel RealSense D435i | All cameras | | Intel RealSense D455 | All cameras | ### Modeling[​](#modeling "Direct link to Modeling")

$$
\begin{aligned}
X' &= X/Z \\
Y' &= Y/Z \\
r^2 &= X'^2 + Y'^2 \\
r' &= (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \\
X_{dist} &= r'X' + 2p_1 X'Y' + p_2(r^2 + 2X'^2) \\
Y_{dist} &= r'Y' + 2p_2 X'Y' + p_1(r^2 + 2Y'^2) \\
x_c &= f X_{dist} + c_x \\
y_c &= f Y_{dist} + c_y
\end{aligned}
$$

### Correcting[​](#correcting "Direct link to Correcting") Correcting for OpenCV RadTan is a non-linear process. The most common method is to run a non-linear optimization to find the corrected point. This is the method used in MetriCal. ### OpenCV Fisheye[​](#opencv-fisheye "Direct link to OpenCV Fisheye") Original Reference OpenCV.org. Fisheye camera model documentation. OpenCV 4.10-dev. MetriCal keyword: `opencv-fisheye` Type: **Modeling** This model is based on OpenCV's Fisheye lens model, which is a modified Kannala-Brandt model. It has no tangential distortion terms, but is robust to wide-angle lens distortion. | Parameter | Description | | --- | --- | | $f$ | focal length (px) | | $c_x$ | principal point in x (px) | | $c_y$ | principal point in y (px) | | $k_1$ | first radial distortion term | | $k_2$ | second radial distortion term | | $k_3$ | third radial distortion term | | $k_4$ | fourth radial distortion term | ### Common Use Cases[​](#common-use-cases-1 "Direct link to Common Use Cases") Any camera with a fisheye lens below 140° diagonal field of view will probably benefit from this model. ### Modeling[​](#modeling-1 "Direct link to Modeling")

$$
\begin{aligned}
X' &= X/Z \\
Y' &= Y/Z \\
r^2 &= X'^2 + Y'^2 \\
\theta &= \operatorname{atan}(r) \\
\theta_d &= \theta(1 + k_1\theta^2 + k_2\theta^4 + k_3\theta^6 + k_4\theta^8) \\
X_{dist} &= \left(\frac{\theta_d}{r}\right)X' \\
Y_{dist} &= \left(\frac{\theta_d}{r}\right)Y' \\
x_c &= f X_{dist} + c_x \\
y_c &= f Y_{dist} + c_y
\end{aligned}
$$

### Correcting[​](#correcting-1 "Direct link to Correcting") Correcting for OpenCV Fisheye is a non-linear process. The most common method is to run a non-linear optimization to find the corrected point. This is the method used in MetriCal. ### OpenCV Rational[​](#opencv-rational "Direct link to OpenCV Rational") Original Reference OpenCV.org. Camera Calibration and 3D Reconstruction documentation. OpenCV 4.10-dev. MetriCal keyword: `opencv-rational` Type: **Modeling** OpenCV Rational is the full distortion model used by OpenCV. It is an extension of the RadTan model, in which the radial distortion is modeled as a rational function.
| Parameter | Description | | --- | --- | | $f$ | focal length (px) | | $c_x$ | principal point in x (px) | | $c_y$ | principal point in y (px) | | $k_1$ | first radial distortion term | | $k_2$ | second radial distortion term | | $k_3$ | third radial distortion term | | $p_1$ | first tangential distortion term | | $p_2$ | second tangential distortion term | | $k_4$ | fourth radial distortion term | | $k_5$ | fifth radial distortion term | | $k_6$ | sixth radial distortion term | ### Common Use Cases[​](#common-use-cases-2 "Direct link to Common Use Cases") This model is mostly equivalent to OpenCV RadTan. Given the number of parameters, this model can overfit on the available data, making it tricky to generalize. However, if you know your camera benefits from this specification, it does offer additional flexibility. As this is a modified version of OpenCV RadTan, we recommend its use for lenses with a field of view of 90° or less. ### Modeling[​](#modeling-2 "Direct link to Modeling")

$$
\begin{aligned}
X' &= X/Z \\
Y' &= Y/Z \\
r^2 &= X'^2 + Y'^2 \\
r' &= \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} \\
X_{dist} &= r'X' + 2p_1 X'Y' + p_2(r^2 + 2X'^2) \\
Y_{dist} &= r'Y' + 2p_2 X'Y' + p_1(r^2 + 2Y'^2) \\
x_c &= f X_{dist} + c_x \\
y_c &= f Y_{dist} + c_y
\end{aligned}
$$

### Correcting[​](#correcting-2 "Direct link to Correcting") Correcting for OpenCV Rational is a non-linear process. The most common method is to run a non-linear optimization to find the corrected point. This is the method used in MetriCal. ### Pinhole with (Inverse) Brown-Conrady[​](#pinhole-with-inverse-brown-conrady "Direct link to Pinhole with (Inverse) Brown-Conrady") Original Publication A. E. Conrady, Decentred Lens-Systems, Monthly Notices of the Royal Astronomical Society, Volume 79, Issue 5, March 1919, Pages 384–390. MetriCal keyword: `pinhole-with-brown-conrady` Type: **Correcting** This model is the first of our *inverse* models. These models *correct* for distortion terms in the image space, rather than *modeling* the effects of distortion in the world space. If you're just looking to correct for distortion, this could be the model for you! Notice that the model parameters are identical to those found in OpenCV RadTan. This is because both models are based on the Brown-Conrady approach to camera modeling. Don't mix up one for the other! | Parameter | Description | | --- | --- | | $f$ | focal length (px) | | $c_x$ | principal point in x (px) | | $c_y$ | principal point in y (px) | | $k_1$ | first radial distortion term | | $k_2$ | second radial distortion term | | $k_3$ | third radial distortion term | | $p_1$ | first tangential distortion term | | $p_2$ | second tangential distortion term | ### Common Use Cases[​](#common-use-cases-3 "Direct link to Common Use Cases") If you're just correcting for distortion, rather than modeling it, this model is a good choice. ### Modeling[​](#modeling-3 "Direct link to Modeling") Modeling distortion for Inverse Brown-Conrady is a non-linear process. The most common method is to run a non-linear optimization to find the distorted point. This is the method used in MetriCal. ### Correcting[​](#correcting-3 "Direct link to Correcting")

$$
\begin{aligned}
r^2 &= x_c^2 + y_c^2 \\
r' &= (k_1 r^2 + k_2 r^4 + k_3 r^6) \\
x_{corr} &= r'x_c + p_1(r^2 + 2x_c^2) + 2p_2 x_c y_c \\
y_{corr} &= r'y_c + p_2(r^2 + 2y_c^2) + 2p_1 x_c y_c \\
\hat{X} &= x_c - x_{corr} \\
\hat{Y} &= y_c - y_{corr} \\
\hat{Z} &= f
\end{aligned}
$$

### Pinhole with (Inverse) Kannala-Brandt[​](#pinhole-with-inverse-kannala-brandt "Direct link to Pinhole with (Inverse) Kannala-Brandt") Original Publication J. Kannala and S. S.
Brandt, "A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 8, pp. 1335-1340, Aug. 2006, doi: 10.1109/TPAMI.2006.153. MetriCal keyword: `pinhole-with-kannala-brandt` Type: **Correcting** Inverse Kannala-Brandt follows the same paradigm as our other Inverse models: it *corrects* distortion in image space, rather than *modeling* it in world space. Inverse Kannala-Brandt is close to the original Kannala-Brandt model, and therefore shares the same set of distortion parameters as OpenCV Fisheye. Don't get them mixed up! | Parameter | Description | | --- | --- | | $f$ | focal length (px) | | $c_x$ | principal point in x (px) | | $c_y$ | principal point in y (px) | | $k_1$ | first radial distortion term | | $k_2$ | second radial distortion term | | $k_3$ | third radial distortion term | | $k_4$ | fourth radial distortion term | ### Common Use Cases[​](#common-use-cases-4 "Direct link to Common Use Cases") If you're just correcting for distortion, rather than modeling it, this model is a good choice for lenses with a field of view of 140° or less. ### Modeling[​](#modeling-4 "Direct link to Modeling") Modeling distortion for Inverse Kannala-Brandt is a non-linear process. The most common method is to run a non-linear optimization to find the distorted point. This is the method used in MetriCal. ### Correction[​](#correction "Direct link to Correction")

$$
\begin{aligned}
r^2 &= x_c^2 + y_c^2 \\
\theta &= \operatorname{atan}\left(\frac{r}{f}\right) \\
r' &= \theta(1 + k_1\theta^2 + k_2\theta^4 + k_3\theta^6 + k_4\theta^8) \\
x_{corr} &= r'\left(\frac{x_c}{r}\right) \\
y_{corr} &= r'\left(\frac{y_c}{r}\right) \\
\hat{X} &= x_c - x_{corr} \\
\hat{Y} &= y_c - y_{corr} \\
\hat{Z} &= f
\end{aligned}
$$

### EUCM[​](#eucm "Direct link to EUCM") Original Publication Khomutenko, B., Garcia, G., & Martinet, P. (2016). An Enhanced Unified Camera Model. IEEE Robotics and Automation Letters, 1(1), 137–144. doi:10.1109/lra.2015.2502921. MetriCal keyword: `eucm` Type: **Unified** EUCM stands for *Enhanced Unified Camera Model* (aka *Extended Unified Camera Model*), and is a riff on the *Unified Camera Model*. This model is "unified" because it offers direct calculation of both modeling and correcting for distortion. At lower distortion levels, this model naturally degrades into a pinhole model. This model is also unique in that it primarily operates on camera rays, rather than requiring a certain focal length or pixel distance to mathematically operate. The Modeling and Correction operations below will convert camera rays to and from a distorted state. Users who wish to operate in the image plane should convert these homogeneous coordinates to a $\hat{Z}$ matching the focal length. | Parameter | Description | | --- | --- | | $f$ | focal length (px) | | $c_x$ | principal point in x (px) | | $c_y$ | principal point in y (px) | | $\alpha$ | The distortion parameter that relates ellipsoid and pinhole projection | | $\beta$ | The distortion parameter that controls the ellipsoid shape | ### Common Use Cases[​](#common-use-cases-5 "Direct link to Common Use Cases") EUCM is (at time of writing) becoming a popular choice for cameras with strong radial distortion. Its ability to model distortion in a way that is both accurate and efficient makes it a good choice for many applications. It is capable of handling distortions for lenses with a field of view greater than 180°.
### Modeling[​](#modeling-5 "Direct link to Modeling")

$$
\begin{aligned}
d &= \sqrt{\beta(\hat{X}^2 + \hat{Y}^2) + \hat{Z}^2} \\
\gamma &= 1 - \alpha \\
s &= \frac{1}{\alpha d + \gamma\hat{Z}} \\
\hat{X}_{dist} &= \hat{X}s \\
\hat{Y}_{dist} &= \hat{Y}s \\
\hat{Z}_{dist} &= 1 \\
x_c &= f\hat{X}_{dist} + c_x \\
y_c &= f\hat{Y}_{dist} + c_y
\end{aligned}
$$

### Correction[​](#correction-1 "Direct link to Correction")

$$
\begin{aligned}
(m_x, m_y) &= \left(\frac{\hat{X}_{dist}}{\hat{Z}_{dist}}, \frac{\hat{Y}_{dist}}{\hat{Z}_{dist}}\right) \\
r^2 &= m_x^2 + m_y^2 \\
\gamma &= 1 - \alpha \\
m_z &= \frac{1 - \beta\alpha^2 r^2}{\alpha\sqrt{1 - (\alpha - \gamma)\beta r^2} + \gamma} \\
s &= \frac{1}{\sqrt{r^2 + m_z^2}} \\
\hat{X} &= m_x s \\
\hat{Y} &= m_y s \\
\hat{Z} &= m_z s
\end{aligned}
$$

### Double Sphere[​](#double-sphere "Direct link to Double Sphere") Original Publication Usenko, V., Demmel, N., & Cremers, D. (2018). The Double Sphere Camera Model. 2018 International Conference on 3D Vision (3DV). doi:10.1109/3dv.2018.00069 MetriCal keyword: `double-sphere` Type: **Unified** Double Sphere is the newest model on this page. Like EUCM, it also offers direct computation of both modeling and correction on camera rays. However, it uses two spheres to model the effects of even strong radial distortion. At lower distortion levels, this model naturally degrades into a pinhole model. | Parameter | Description | | --- | --- | | $f$ | focal length (px) | | $c_x$ | principal point in x (px) | | $c_y$ | principal point in y (px) | | $\xi$ | The distortion parameter corresponding to distance between spheres | | $\alpha$ | The distortion parameter that relates second sphere and pinhole projection | ### Common Use Cases[​](#common-use-cases-6 "Direct link to Common Use Cases") Its use of projection through its "double spheres" makes it an ideal model for ultra-wide field of view lenses. The lenses tested in its original publication had fields of view ranging from 122° to 195°! ### Modeling[​](#modeling-6 "Direct link to Modeling")

$$
\begin{aligned}
d_1 &= \sqrt{\hat{X}^2 + \hat{Y}^2 + \hat{Z}^2} \\
Z_{mod} &= \xi d_1 + \hat{Z} \\
d_2 &= \sqrt{\hat{X}^2 + \hat{Y}^2 + Z_{mod}^2} \\
s &= \frac{1}{\alpha d_2 + (1 - \alpha)Z_{mod}} \\
\hat{X}_{dist} &= \hat{X}s \\
\hat{Y}_{dist} &= \hat{Y}s \\
\hat{Z}_{dist} &= 1 \\
x_c &= f\hat{X}_{dist} + c_x \\
y_c &= f\hat{Y}_{dist} + c_y
\end{aligned}
$$

### Correcting[​](#correcting-4 "Direct link to Correcting")

$$
\begin{aligned}
(m_x, m_y) &= \left(\frac{\hat{X}_{dist}}{\hat{Z}_{dist}}, \frac{\hat{Y}_{dist}}{\hat{Z}_{dist}}\right) \\
r^2 &= m_x^2 + m_y^2 \\
m_z &= \frac{1 - \alpha^2 r^2}{\alpha\sqrt{1 - (2\alpha - 1)r^2} + (1 - \alpha)} \\
s &= \frac{m_z\xi + \sqrt{m_z^2 + (1 - \xi^2)r^2}}{m_z^2 + r^2} \\
\hat{X} &= m_x s \\
\hat{Y} &= m_y s \\
\hat{Z} &= m_z s - \xi
\end{aligned}
$$

### Omnidirectional (Omni)[​](#omnidirectional-omni "Direct link to Omnidirectional (Omni)") Original Publication Mei, C. and Rives, P. Single View Point Omnidirectional Camera Calibration from Planar Grids. Proceedings 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 2007, pp. 3945-3950, doi: 10.1109/ROBOT.2007.364084. MetriCal keyword: `omni` Type: **Modeling** The Omnidirectional camera model is designed to handle the unique distortion profile of a *catadioptric* camera system, or one that combines mirrors and lenses together. These systems are often used for camera systems that seek to capture a full 360° field of view (or close to it). In "[A Unifying Theory for Central Panoramic Systems and Practical Implications](https://link.springer.com/content/pdf/10.1007/3-540-45053-X_29.pdf)" (Geyer, Daniilidis), the authors show that all mirror surfaces can be modeled with a projection from an imaginary unit sphere onto a plane perpendicular to the sphere center and the conic created by the mirror. The Omnidirectional camera model codifies this relationship mathematically. Yes, it's a little confusing We won't go into it here, but the papers linked above explain it well. The Omnidirectional camera model also implements the same radial and tangential distortion terms as [OpenCV RadTan](#opencv-radtan). However, while OpenCV RadTan uses 3 radial distortion terms, this only uses 2. The reason for this? Everyone else did it (even the authors' original implementation), so now it's convention.
| Parameter | Description |
| --------- | ----------- |
| $f$       | generalized focal length (px). This term is not a "true" focal length, but rather the camera focal length scaled by a collinear factor $\eta$ that represents the effect of the mirror |
| $c_x$     | principal point in x (px) |
| $c_y$     | principal point in y (px) |
| $\xi$     | the distance between the unit sphere and the "projection" sphere upon which the focal plane is projected |
| $k_1$     | first radial distortion term |
| $k_2$     | second radial distortion term |
| $p_1$     | first tangential distortion term |
| $p_2$     | second tangential distortion term |

### Common Use Cases[​](#common-use-cases-7 "Direct link to Common Use Cases")

The omnidirectional camera model is best used for extreme fields of view. Anything greater than 140° would be well-served by this model.

### Modeling[​](#modeling-7 "Direct link to Modeling")

$$
\begin{aligned}
\mathbf{X} &= [X, Y, Z] \\
\mathbf{X}_S &= \frac{\mathbf{X}}{\lVert\mathbf{X}\rVert} \\
\mathbf{X}_C &= [X_S, Y_S, Z_S + \xi] \\
X' &= \left(\frac{X_C}{Z_C}\right) \\
Y' &= \left(\frac{Y_C}{Z_C}\right) \\
r^2 &= X'^2 + Y'^2 \\
r' &= 1 + k_1 r^2 + k_2 r^4 \\
X_{dist} &= r' X' + 2 p_1 X' Y' + p_2\left(r^2 + 2 X'^2\right) \\
Y_{dist} &= r' Y' + 2 p_2 X' Y' + p_1\left(r^2 + 2 Y'^2\right) \\
x_c &= f X_{dist} + c_x \\
y_c &= f Y_{dist} + c_y
\end{aligned}
$$

### Correcting[​](#correcting-5 "Direct link to Correcting")

Though the Omnidirectional model technically has a unified inversion, the introduction of the radial and tangential distortion means that correcting is a non-linear process. The most common method is to run a non-linear optimization to find the corrected point. This is the method used in MetriCal.

### Power Law[​](#power-law "Direct link to Power Law")

Original Reference

This is an internal model developed by Tangram Vision for modeling distortion with a simple power law.

MetriCal keyword: `power-law`

Type: **Modeling**

The Power Law camera model describes distortion as a power law, with distortion increasing exponentially according to the `alpha` term. The `beta` term is a linear parameter on the radial distance. This model is designed to be simple yet effective for many camera types, particularly for lenses with moderate to strong radial distortion.

The full form of the model is:

$$
\begin{aligned}
r &= \sqrt{u^2 + v^2} \\
u' &= u - \beta \cdot r^\alpha \cdot \frac{u}{r} \\
v' &= v - \beta \cdot r^\alpha \cdot \frac{v}{r}
\end{aligned}
$$

When simplified, the distortion profile itself can be expressed as:

$$
dist\_radial = 1 - \beta \cdot r^{\alpha - 1}
$$

| Parameter | Description                                        |
| --------- | -------------------------------------------------- |
| $f$       | focal length (px)                                  |
| $c_x$     | principal point in x (px)                          |
| $c_y$     | principal point in y (px)                          |
| $\alpha$  | exponential distortion parameter (must be >= 1.0)  |
| $\beta$   | linear distortion parameter                        |

### Common Use Cases[​](#common-use-cases-8 "Direct link to Common Use Cases")

The Power Law model provides a nice balance between simplicity and expressiveness. It can handle a wide range of camera distortion profiles with just two parameters. It works well for:

* Cameras with moderate to strong radial distortion
* Systems where computational efficiency is important
* Cases where you need a simple model that can still handle non-linear distortion effects

### Modeling[​](#modeling-8 "Direct link to Modeling")

$$
\begin{aligned}
X' &= X / Z \\
Y' &= Y / Z \\
r &= \sqrt{X'^2 + Y'^2} \\
dist\_radial &= 1 - \beta \cdot r^{\alpha - 1} \\
X_{dist} &= X' \cdot dist\_radial \\
Y_{dist} &= Y' \cdot dist\_radial \\
x_c &= f \cdot X_{dist} + c_x \\
y_c &= f \cdot Y_{dist} + c_y
\end{aligned}
$$

### Correcting[​](#correcting-6 "Direct link to Correcting")

Correcting for the Power Law model is a non-linear process, requiring iterative optimization. MetriCal uses a Levenberg-Marquardt algorithm to find the corrected point, since this model can be sensitive at certain values of alpha and beta.
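
As a rough illustration of the two directions above, here is a minimal Python sketch of the Power Law model. The function names are hypothetical and not part of any MetriCal API, and the correction step substitutes a simple fixed-point iteration where MetriCal itself uses Levenberg-Marquardt:

```
import numpy as np

# Hypothetical helper names; not part of the MetriCal API.
def power_law_project(X, Y, Z, f, cx, cy, alpha, beta):
    """Model (apply) Power Law distortion for a 3D point, per the equations above."""
    xp, yp = X / Z, Y / Z                        # normalized image-plane coordinates
    r = np.sqrt(xp**2 + yp**2)
    dist_radial = 1.0 - beta * r**(alpha - 1.0)  # radial distortion profile
    return f * xp * dist_radial + cx, f * yp * dist_radial + cy

def power_law_correct(u, v, f, cx, cy, alpha, beta, iters=20):
    """Correct (undistort) a pixel. MetriCal uses Levenberg-Marquardt here;
    this sketch substitutes a simple fixed-point iteration for brevity."""
    xd, yd = (u - cx) / f, (v - cy) / f          # distorted normalized coordinates
    xc, yc = xd, yd                              # initial guess: no distortion
    for _ in range(iters):
        r = np.sqrt(xc**2 + yc**2)
        dist_radial = 1.0 - beta * r**(alpha - 1.0)
        xc, yc = xd / dist_radial, yd / dist_radial
    return xc, yc
```

For moderate distortion values, projecting a point and then correcting the resulting pixel should recover the original normalized coordinates to within a small tolerance.
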
---

# IMU Models

Below are all supported IMU intrinsics models in MetriCal. If there is a model that you use that is not listed here, just [contact us](/metrical/support_and_admin/contact.md)! We're always looking to expand our support.

## Common Variables and Definitions[​](#common-variables-and-definitions "Direct link to Common Variables and Definitions")

| Variables        | Description |
| ---------------- | ----------- |
| $\omega^i$       | Corrected (Calibrated) angular velocity. |
| $f^i$            | Corrected (Calibrated) specific force. |
| $\hat{\omega}^g$ | Distorted (Uncalibrated) gyroscope measurement. |
| $\hat{f}^a$      | Distorted (Uncalibrated) accelerometer measurement. Also referred to as the specific force measurement. |

**Specific Force** is the mass-specific force experienced by the proof mass in an accelerometer. This is commonly known as the accelerometer measurement. It differs from acceleration in that it has an additive term due to gravity operating on the proof mass in the accelerometer. To convert between the acceleration of the accelerometer and specific force, one can apply the equation

$$
f^i = a^i + g^i
$$

where $g^i$ is the gravity vector expressed in the IMU's frame.

**Modeling** (Calibrated → Uncalibrated) refers to taking an ideal angular velocity or specific force and modeling the uncalibrated measurement produced by an IMU sensor given the intrinsics provided. In other words, it *models* the effect of the intrinsics on the angular velocity or specific force experienced by the IMU.

**Correcting** (Uncalibrated → Calibrated) refers to taking an uncalibrated gyroscope or accelerometer measurement produced by an IMU sensor and correcting for intrinsics effects using the intrinsics provided. In other words, it *corrects* the measurement produced by an IMU, approximating the true angular velocity or specific force experienced by the IMU.

Correcting Models

All the IMU intrinsic models used in MetriCal are correcting models in that they naturally express the corrected quantities in terms of the measured quantities. However, these models may also be used to model the IMU intrinsics. This simply requires inverting the model by solving for the measured quantities in terms of the corrected quantities. The modeling equations for each of the intrinsic models are given below.

## IMU Frames[​](#imu-frames "Direct link to IMU Frames")

A coordinate frame is simply the position and directions defining the basis which can be used to numerically express a quantity. The coordinate frames used for IMU models are given in the following table.

| Frame | Description |
| ----- | ----------- |
| $a$   | The accelerometer coordinate frame |
| $g$   | The gyroscope coordinate frame |
| $i$   | The IMU coordinate frame |

In the currently supported IMU models in MetriCal, the $a$, $g$, and $i$ coordinate frames are assumed to be at the same position in space. Furthermore, the $a$ and $g$ frames are not assumed to be orthonormal. The $i$ frame, however, will always be a proper orthonormal coordinate frame.

Numerical quantities use a superscript to describe their frame of reference. As such, the above frame variables should only be interpreted as a frame if they appear as a superscript. For example, the gravity vector in the IMU's frame is given as $g^i$, but $g$ should not be interpreted as a frame in this context.

## IMU Bias[​](#imu-bias "Direct link to IMU Bias")

Recommended Reading

Woodman, O. (2007). An introduction to inertial navigation.
[Read here](https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-696.pdf).

Farrell, J. Silva, F. Rahman, F. Wendel, J. (2022). IMU Error Modeling Tutorial. IEEE Control Systems Magazine, Vol 42, No 6, pp. 40-66. [Read here](https://escholarship.org/content/qt1vf7j52p/qt1vf7j52p_noSplash_750bb4e8a68d04f2d577450b1bf56572.pdf).

IMUs have a time-varying bias on the measurements they produce. In MetriCal, this is modeled as an additive bias on top of the intrinsically distorted measurements.

| Parameter             | Mathematical Notation | Description |
| --------------------- | --------------------- | ----------- |
| `gyro_bias`           | $b_g^g$               | Additive bias on the gyroscope measurement. |
| `specific_force_bias` | $b_f^a$               | Additive bias on the accelerometer measurement. |

By necessity, MetriCal infers the additive biases on both the accelerometer and the gyroscope measurements. However, the IMU bias is a time-varying quantity that is highly influenced by electrical noise, temperature fluctuations, and other factors. These biases can change simply by power cycling the IMU sensor. As such, it is recommended that the IMU biases inferred by MetriCal are only used as an initial estimate of bias in applications.

## IMU Noise[​](#imu-noise "Direct link to IMU Noise")

Noise Specification

A full understanding of the IMU noise model is not necessary to use MetriCal. MetriCal chooses default values for the noise model that should work with most MEMS consumer-grade IMUs. However, the interested reader is referred to the [IEEE Specification](https://ieeexplore.ieee.org/document/660628) Appendix C, where these values are more rigorously defined.

In addition to bias and intrinsics, IMUs experience noise on their measurement outputs. This noise must be modeled to properly infer the bias and intrinsic parameters of an IMU. At a high level, the noise model is given as

$$
x(t) = y(t) + n(t) + b(t) + k(t)
$$

where

* $x(t)$ is the noisy time-varying signal
* $y(t)$ is the true time-varying signal
* $n(t)$ is the component of the noise with constant power spectral density (white noise)
* $b(t)$ is the component of the noise with $1/f$ power spectral density (pink noise)
* $k(t)$ is the component of the noise with $1/f^2$ power spectral density (brown noise)

This noise model is applied to each component of the gyroscope and accelerometer output. More details on this noise model can be found in the [IEEE Specification](https://ieeexplore.ieee.org/document/660628).
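
To make the three noise terms concrete, here is a small synthetic Python sketch of a gyroscope axis under this noise model. All coefficient values are made up for illustration, and the $1/f$ (pink) term is approximated with a first-order Gauss-Markov process, which is a common stand-in rather than anything specified by MetriCal:

```
import numpy as np

# Synthetic illustration of x(t) = y(t) + n(t) + b(t) + k(t).
# Coefficient values below are invented; real values come from a
# datasheet or an Allan variance analysis.
rng = np.random.default_rng(0)
dt, n_samples = 0.005, 20_000          # a 200 Hz gyroscope stream
t = np.arange(n_samples) * dt

y = 0.1 * np.sin(2 * np.pi * 0.5 * t)  # "true" angular rate signal (rad/s)

arw = 1e-3                             # angular random walk (white noise density)
n = rng.normal(0.0, arw / np.sqrt(dt), n_samples)

rrw = 1e-5                             # rate random walk (brown noise coefficient)
k = np.cumsum(rng.normal(0.0, rrw * np.sqrt(dt), n_samples))

tau, bi = 100.0, 5e-4                  # correlation time (s), bias instability (rad/s)
b = np.zeros(n_samples)
for i in range(1, n_samples):          # first-order Gauss-Markov approximation of 1/f noise
    b[i] = b[i - 1] * np.exp(-dt / tau) + rng.normal(0.0, bi * np.sqrt(dt / tau))

x = y + n + b + k                      # the measured, noisy signal
```

Plotting the Allan deviation of `x` against `y` is a quick way to see how each term dominates at different averaging times.
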
### Noise Parameters[​](#noise-parameters "Direct link to Noise Parameters")

The noise parameters for the IMU noise model are given as

| Parameter | Units | Description |
| ----------------------------------------- | ----------- | --------------------------------------------------------------------- |
| `angular_random_walk`                      | rad/s^(1/2) | Angular random walk coefficients |
| `gyro_bias_instability`                    | rad/s       | Gyroscope bias instability coefficients |
| `gyro_rate_random_walk`                    | rad/s^(3/2) | Gyroscope rate random walk coefficients |
| `gyro_correlation_time_constant`           | unitless    | Gyroscope correlation time constant for bias instability process |
| `gyro_turn_on_bias_uncertainty`            | rad/s       | Prior standard deviation of the gyroscope bias components |
| `velocity_random_walk`                     | m/s^(3/2)   | Velocity random walk coefficients |
| `accelerometer_bias_instability`           | m/s^2       | Accelerometer bias instability coefficients |
| `accelerometer_rate_random_walk`           | m/s^(5/2)   | Accelerometer rate random walk coefficients |
| `accelerometer_correlation_time_constant`  | unitless    | Accelerometer correlation time constant for bias instability process |
| `accelerometer_turn_on_bias_uncertainty`   | m/s^2       | Prior standard deviation of the specific force bias components |

MetriCal will automatically set these with reasonable values for consumer-grade IMUs (like the BMI055 used in Intel's RealSense D435i). However, if you have better values from an IMU datasheet or an Allan variance analysis, you can modify the defaults to better suit your needs. If you are using values given in a datasheet, be sure to convert to the units used by MetriCal, since many datasheets will give units like mg/√s (milli-g's) or (°/hour)/√Hz.

## IMU Model Descriptions[​](#imu-model-descriptions "Direct link to IMU Model Descriptions")

* Scale
* Scale and Shear
* Scale, Shear, and Rotation
* Scale, Shear, Rotation, and G-sensitivity

### Scale[​](#scale "Direct link to Scale")

MetriCal keyword: `scale`

This model corrects any scale factor errors in the accelerometer and gyroscope measurements.

| Parameter | Mathematical Notation | Description |
| ---------------------- | -------------------------- | ------------------------------------------------------------------ |
| `gyro_scale`           | $[s_{gx}, s_{gy}, s_{gz}]$ | Scaling applied to each component of the gyroscope measurement |
| `specific_force_scale` | $[s_{fx}, s_{fy}, s_{fz}]$ | Scaling applied to each component of the accelerometer measurement |

Where the intrinsic correction matrices are formed from the other variables as

$$
D_g = \begin{bmatrix} s_{gx} & 0 & 0 \\ 0 & s_{gy} & 0 \\ 0 & 0 & s_{gz} \end{bmatrix}
\qquad
D_f = \begin{bmatrix} s_{fx} & 0 & 0 \\ 0 & s_{fy} & 0 \\ 0 & 0 & s_{fz} \end{bmatrix}
$$

#### Correction[​](#correction "Direct link to Correction")

$$
\begin{aligned}
\omega^i &= D_g\left(\hat{\omega}^g - b_g^g\right) \\
f^i &= D_f\left(\hat{f}^a - b_f^a\right)
\end{aligned}
$$

#### Modeling[​](#modeling "Direct link to Modeling")

$$
\begin{aligned}
\hat{\omega}^g &= D_g^{-1}\omega^i + b_g^g \\
\hat{f}^a &= D_f^{-1}f^i + b_f^a
\end{aligned}
$$

### Scale and Shear[​](#scale-and-shear "Direct link to Scale and Shear")

MetriCal keyword: `scale-shear`

This model corrects any scale factor errors and non-orthogonality errors in the accelerometer or gyroscope measurements. These non-orthogonality errors are often called "shear" errors, which gives the scale and shear model its name.
| Parameter | Mathematical Notation | Description |
| ---------------------- | ---------------------------------------------- | ------------------------------------------------------------------------ |
| `gyro_scale`           | $[s_{gx}, s_{gy}, s_{gz}]$                     | Scaling applied to each component of the gyroscope measurement |
| `specific_force_scale` | $[s_{fx}, s_{fy}, s_{fz}]$                     | Scaling applied to each component of the accelerometer measurement |
| `gyro_shear`           | $[\sigma_{gxy}, \sigma_{gxz}, \sigma_{gyz}]$   | Non-orthogonality compensation applied to the gyroscope measurement |
| `accelerometer_shear`  | $[\sigma_{fxy}, \sigma_{fxz}, \sigma_{fyz}]$   | Non-orthogonality compensation applied to the accelerometer measurement |

Where the intrinsic correction matrices are formed from the other variables as

$$
D_g = \begin{bmatrix} s_{gx} & \sigma_{gxy} & \sigma_{gxz} \\ 0 & s_{gy} & \sigma_{gyz} \\ 0 & 0 & s_{gz} \end{bmatrix}
\qquad
D_f = \begin{bmatrix} s_{fx} & \sigma_{fxy} & \sigma_{fxz} \\ 0 & s_{fy} & \sigma_{fyz} \\ 0 & 0 & s_{fz} \end{bmatrix}
$$

#### Correction[​](#correction-1 "Direct link to Correction")

$$
\begin{aligned}
\omega^i &= D_g\left(\hat{\omega}^g - b_g^g\right) \\
f^i &= D_f\left(\hat{f}^a - b_f^a\right)
\end{aligned}
$$

#### Modeling[​](#modeling-1 "Direct link to Modeling")

$$
\begin{aligned}
\hat{\omega}^g &= D_g^{-1}\omega^i + b_g^g \\
\hat{f}^a &= D_f^{-1}f^i + b_f^a
\end{aligned}
$$

### Scale, Shear, and Rotation[​](#scale-shear-and-rotation "Direct link to Scale, Shear, and Rotation")

MetriCal keyword: `scale-shear-rotation`

This model corrects any scale factor errors and non-orthogonality errors in the accelerometer or gyroscope measurements. Additionally, this model corrects for any rotational misalignment between the gyroscope and the IMU's frame. In this model, the IMU frame is assumed to be the same as the scale and shear corrected accelerometer frame.

| Parameter | Mathematical Notation | Description |
| ---------------------- | ---------------------------------------------- | ------------------------------------------------------------------------ |
| `gyro_scale`           | $[s_{gx}, s_{gy}, s_{gz}]$                     | Scaling applied to each component of the gyroscope measurement |
| `specific_force_scale` | $[s_{fx}, s_{fy}, s_{fz}]$                     | Scaling applied to each component of the accelerometer measurement |
| `gyro_shear`           | $[\sigma_{gxy}, \sigma_{gxz}, \sigma_{gyz}]$   | Non-orthogonality compensation applied to the gyroscope measurement |
| `accelerometer_shear`  | $[\sigma_{fxy}, \sigma_{fxz}, \sigma_{fyz}]$   | Non-orthogonality compensation applied to the accelerometer measurement |
| `accel_from_gyro_rot`  | $R_g^i$                                        | The rotation from the gyroscope frame to the IMU's frame |

The intrinsic correction matrices are formed with the two equations

$$
D_g = \begin{bmatrix} s_{gx} & \sigma_{gxy} & \sigma_{gxz} \\ 0 & s_{gy} & \sigma_{gyz} \\ 0 & 0 & s_{gz} \end{bmatrix}
\qquad
D_f = \begin{bmatrix} s_{fx} & \sigma_{fxy} & \sigma_{fxz} \\ 0 & s_{fy} & \sigma_{fyz} \\ 0 & 0 & s_{fz} \end{bmatrix}
$$

#### Correction[​](#correction-2 "Direct link to Correction")

$$
\begin{aligned}
\omega^i &= R_g^i D_g\left(\hat{\omega}^g - b_g^g\right) \\
f^i &= D_f\left(\hat{f}^a - b_f^a\right)
\end{aligned}
$$

#### Modeling[​](#modeling-2 "Direct link to Modeling")

$$
\begin{aligned}
\hat{\omega}^g &= D_g^{-1} R_i^g \omega^i + b_g^g \\
\hat{f}^a &= D_f^{-1} f^i + b_f^a
\end{aligned}
$$

### Scale, Shear, Rotation, and G-sensitivity[​](#scale-shear-rotation-and-g-sensitivity "Direct link to Scale, Shear, Rotation, and G-sensitivity")

MetriCal keyword: `scale-shear-rotation-g-sensitivity`

This model corrects any scale factor errors, non-orthogonality errors, and rotational misalignment errors in the accelerometer or gyroscope measurements. Additionally, this model corrects for a phenomenon known as *G-sensitivity*. G-sensitivity is a property of an oscillating gyroscope that causes a gyroscope to register a bias on its measurements when the gyroscope experiences a specific force.
| Parameter | Mathematical Notation | Description |
| -------------------------- | ---------------------------------------------- | ---------------------------------------------------------------------------- |
| `gyro_scale`               | $[s_{gx}, s_{gy}, s_{gz}]$                     | Scaling applied to each component of the gyroscope measurement |
| `specific_force_scale`     | $[s_{fx}, s_{fy}, s_{fz}]$                     | Scaling applied to each component of the accelerometer measurement |
| `gyro_shear`               | $[\sigma_{gxy}, \sigma_{gxz}, \sigma_{gyz}]$   | Non-orthogonality compensation applied to the gyroscope measurement |
| `accelerometer_shear`      | $[\sigma_{fxy}, \sigma_{fxz}, \sigma_{fyz}]$   | Non-orthogonality compensation applied to the accelerometer measurement |
| `accel_from_gyro_rot`      | $R_g^i$                                        | The rotation from the gyroscope frame to the IMU's frame |
| `g_sensitivity`            | $[\gamma_x, \gamma_y, \gamma_z]$               | The in-axis G-sensitivity of the gyroscope induced by the specific force |
| `g_sensitivity_cross_axis` | $[\gamma_{xy}, \gamma_{xz}, \gamma_{yz}]$      | The cross-axis G-sensitivity of the gyroscope induced by the specific force |

The intrinsic correction matrices are formed with the two equations

$$
D_g = \begin{bmatrix} s_{gx} & \sigma_{gxy} & \sigma_{gxz} \\ 0 & s_{gy} & \sigma_{gyz} \\ 0 & 0 & s_{gz} \end{bmatrix}
\qquad
D_f = \begin{bmatrix} s_{fx} & \sigma_{fxy} & \sigma_{fxz} \\ 0 & s_{fy} & \sigma_{fyz} \\ 0 & 0 & s_{fz} \end{bmatrix}
$$

and the G-sensitivity matrix is given by

$$
T = \begin{bmatrix} \gamma_x & \gamma_{xy} & \gamma_{xz} \\ 0 & \gamma_y & \gamma_{yz} \\ 0 & 0 & \gamma_z \end{bmatrix}
$$

#### Correction[​](#correction-3 "Direct link to Correction")

$$
\begin{aligned}
\omega^i &= R_g^i D_g\left(\hat{\omega}^g - T f^i - b_g^g\right) \\
f^i &= D_f\left(\hat{f}^a - b_f^a\right)
\end{aligned}
$$

Correction Order

By adding G-sensitivity, the gyroscope correction becomes dependent upon the specific-force correction. As such, it is necessary to first correct the accelerometer measurement and then use the corrected specific force to correct the gyroscope measurement.

#### Modeling[​](#modeling-3 "Direct link to Modeling")

$$
\begin{aligned}
\hat{\omega}^g &= D_g^{-1} R_i^g \omega^i + T f^i + b_g^g \\
\hat{f}^a &= D_f^{-1} f^i + b_f^a
\end{aligned}
$$

---

# LiDAR Models

Below are all supported LiDAR intrinsics models in MetriCal. If there is a model that you use that is not listed here, just [contact us](/metrical/support_and_admin/contact.md)! We're always looking to expand our support.

## Lidar's only model: Lidar[​](#lidars-only-model-lidar "Direct link to Lidar's only model: Lidar")

MetriCal keyword: `lidar`

For all intents and purposes, LiDAR intrinsics are usually reliable from the factory. MetriCal currently only supports extrinsics calibration for LiDAR, making this our simplest model. It assumes there are no range, azimuth, or altitude offsets for any of the beams. This is the default model for all LiDARs.

---

# Local Navigation System Models

Below are all supported LNS intrinsics models in MetriCal. If there is a model that you use that is not listed here, just [contact us](/metrical/support_and_admin/contact.md)! We're always looking to expand our support.

## LNS's only model: LNS[​](#lnss-only-model-lns "Direct link to LNS's only model: LNS")

MetriCal keyword: `lns`

LNS can only be represented by an odometry data type; by definition, odometry is purely extrinsic. Therefore, there's only one thing to do here: line up those positions with other sensors.

---

# Releases + Changelogs

## Version 15.0.1 - October 29, 2025[​](#version-1501---october-29-2025 "Direct link to Version 15.0.1 - October 29, 2025")

This patch update addresses a few incongruities in CLI arguments and help comments in MetriCal proper, making them 1:1 with the documentation you read here. We've also snuck in a few optimization improvements to speed things up.
## Version 15.0.0 - October 23, 2025[​](#version-1500---october-23-2025 "Direct link to Version 15.0.0 - October 23, 2025")

### Overview[​](#overview "Direct link to Overview")

This release significantly upgrades MetriCal's ability to handle complex production environments, with a focus on improving calibration workflows and handling complicated multi-step calibrations. All in all, it marks a major step forward in MetriCal's continued maturation as a production-grade calibration tool.

### Migration Guide[​](#migration-guide "Direct link to Migration Guide")

#### Introducing Manifests[​](#introducing-manifests "Direct link to Introducing Manifests")

Pipeline Replacement

If you're wondering where [pipeline mode](/metrical/14.1/commands/pipeline) went, well, here's your answer.

Most notably, 15.0.0 introduces the concept of [*Manifests*](/metrical/commands/manifest/manifest_overview.md), which describe a complete calibration procedure from beginning to end. It's our sincere hope that manifests will make it easier for new users to understand how to use MetriCal, while also providing experienced users with a powerful tool to codify and share their calibration processes. We recommend all users migrate to using manifests for their calibration workflows.

#### MCAPs as Output[​](#mcaps-as-output "Direct link to MCAPs as Output")

Results MCAPs

Read more about how to interact with MCAP result files in the [Results Output docs](/metrical/results/output_file.md).

MetriCal no longer outputs results or detections as schema-based JSON. Instead, MetriCal writes everything out as MCAP to provide better backwards compatibility in future versions. If you'd like to learn more about this change, check out our blog post written about the move [here](https://www.tangramvision.com/blog/moving-metrical-metrics-to-mcaps).

*NOTE*: Old `results.json` files are not expected to work with this release at all. However, the plex and object space files within those results files are *still valid* and can be used as-is.

Results MCAP files incorporate all the data used to generate report information during the execution of MetriCal: inputs (plex, object-space, etc.), outputs (plex, object-space), and more. The protobuf-based schemata for message types can be [found here](https://gitlab.com/tangram-vision/oss/tangram-protobuf-messages).

Results MCAP files are now output for every calibration run, regardless of success status; however, failed calibrations will output an abbreviated results file without any of the pre-cal, residual, or summary metrics from the optimization. Similarly, optimized plex and object-spaces are not attached to the MCAP file in these states. This means users can provide these results files directly to Tangram for debugging, since they contain all the necessary inputs (sans the dataset itself) to make reproducing errors easier.

All modes that previously took a results.json (e.g. Display mode) now take a results MCAP file instead.

#### Credit-Based Licensing[​](#credit-based-licensing "Direct link to Credit-Based Licensing")

MetriCal is shifting to a credits-based licensing system in order to lower the barrier to entry for new and curious users. Each calibration run will consume a certain number of credits based on the complexity and resources used. The base cost for a single modality calibration (e.g., camera-only) is 3 credits, with one additional credit required for each extra modality.
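
For example, a run that calibrates cameras, LiDAR, and an IMU together (three modalities) would consume 3 + 1 + 1 = 5 credits.
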
If you are an existing MetriCal user under an annual contract, we will be converting your current license to the equivalent number of credits per month. Users under a monthly subscription will receive communication from us on the transition.

#### Object Space Mutual Construction Groups[​](#object-space-mutual-construction-groups "Direct link to Object Space Mutual Construction Groups")

Example Updated Object Space

Download an example updated object space for multi-modality targets in the [Combining Target Modalities](/metrical/targets/combining_modalities.md) documentation.

The adoption of the [Consolidate command](/metrical/commands/calibration/consolidate.md) into the mainline workflow has necessitated a breaking change to how mutual construction groups are specified in object spaces. All mutual construction groups now specify the geometric transform between target origins within their definition:

```
"mutual_construction_groups": [
  {
    "24e6df7b-b756-4b9c-a719-660d45d796bf": "parent",
    "34e6df7b-b756-4b9c-a719-660d45d796bf": {
      "parent_from_object": {
        "rotation": [
          // As quaternion
          0, 0, 0, 1
        ],
        "translation": [
          0.30, 0.40, 0 // As (x, y, z) in meters
        ]
      }
    }
  }
],
```

All camera-lidar mutual construction groups must be updated to match this new format. Note that previously, camera-lidar targets held their geometric relation in the [`x_offset` and `y_offset`](/metrical/14.1/targets/target_overview?target-type=circle#lidar-circle-target) fields of the Circle target:

```
"circle": {
  "radius": 0.60,
  "x_offset": 0.30,
  "y_offset": 0.40
},
```

Just move this information into the mutual construction group as shown above, and you should be good to go.

#### No ROS 1 Bag Support[​](#no-ros-1-bag-support "Direct link to No ROS 1 Bag Support")

ROS 1 bag support has been removed entirely. If you are using a ROS 1 bag, please convert it to MCAP using the [mcap CLI tool](https://mcap.dev/guides/getting-started/ros-1#convert-to-mcap).

#### Coordinate Basis Handling Changes[​](#coordinate-basis-handling-changes "Direct link to Coordinate Basis Handling Changes")

Bases are no longer input into MetriCal via the CLI.

Don't worry! We'll bring it back. We're just laying the groundwork for a more robust basis handling system that will be implemented in the future, one that will go hand-in-hand with both the Init and Calibrate commands.

#### Init Plex References (Seeding Plexes)[​](#init-plex-references-seeding-plexes "Direct link to Init Plex References (Seeding Plexes)")

Init behavior has changed considerably when it comes to how seed plexes are handled.

First off, the terminology within the docs and CLI has changed from "seed plex" to "Plex References", to better reflect the fact that these files may or may not be plexes, and may or may not have components and constraints that are applied to the final initialized plex.

Furthermore, Plex References are now applied in the order that they are provided to the CLI. This means that in a series of Plex References that all have overlapping components, the last one takes precedence. You can read more about this behavior in the [Init command docs](/metrical/commands/calibration/init.md).

#### Argument Modifications[​](#argument-modifications "Direct link to Argument Modifications")

Using manifests also exposed a number of incongruities and rough edges in existing commands, which have all been addressed in this release. This means, in turn, that many commands have changed in small (or not-so-small) ways.
Please read through the changelog carefully to ensure that your existing workflows are not disrupted.

Commands:

| Old command | New Command | Description |
| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------ |
| `pipeline`                  | [`new`](/metrical/commands/manifest/new.md), [`run`](/metrical/commands/manifest/run.md), [`validate`](/metrical/commands/manifest/validate.md)     | Replaced pipeline command with manifest commands |
| `consolidate-object-spaces` | [`consolidate`](/metrical/commands/calibration/consolidate.md)                                                                                       | The old command name is still preserved as alias |

Global Arguments:

| Old argument | New argument | Description |
| ------------------------------ | ------------ | ----------- |
| `--topic-to-observation-basis` | N/A          | Removed     |
| `--topic-to-component-basis`   | N/A          | Removed     |

Init Command:

| Old argument | New argument | Description |
| ------------------------ | ----------------------------- | ------------------------------------------------ |
| `--remap-seed-component` | `--remap-reference-component` | Old argument still preserved as alias |
| `--overwrite-plex`       | `--overwrite-strategy`        | Strategy is now explicit |
| `--seed-cal-path`        | `--reference-source`          | Old argument still preserved as alias |
| N/A                      | `--uuid-strategy`             | A UUID strategy must now be specified explicitly |

Calibrate Command:

| Old argument | New argument | Description |
| ----------------------------------- | ------------------------------------------- | --------------------------------------------------- |
| `--topic-to-component`              | N/A                                         | Remapping is only performed in the Init command now |
| `--camera-motion-threshold [value]` | `--camera-motion-threshold [profile/value]` | Camera motion threshold has convenient profiles |
| `--lidar-motion-threshold [value]`  | `--lidar-motion-threshold [profile/value]`  | Lidar motion threshold has convenient profiles |

Display Command:

| Old argument | New argument | Description |
| ----------------------- | -------------------------- | ------------------------------------ |
| `display [DATA] [PLEX]` | `display [DATA] -p [PLEX]` | The plex is now an optional argument |

Report Command:

| Old argument | New argument | Description |
| ------------- | ------------ | ---------------------------------------------------------------------- |
| `--origin`    | N/A          | Removed; Report mode no longer supports printing specific constraints |
| `--secondary` | N/A          | Removed; Report mode no longer supports printing specific constraints |

### Changelog[​](#changelog "Direct link to Changelog")

#### Added[​](#added "Direct link to Added")

* Format deserialization for JSON and TOML types will now point you to the location in the text where an error was encountered
* Improved error handling due to mis-ordering path-like arguments to the various MetriCal modes. Now MetriCal will not only report format-specific errors but will likewise avoid reading files into memory in cases where users pass in e.g. a large MCAP where a plex JSON file is expected.
* Support for Tangram's protobuf-based IMU sample message type
* Manifests are now available for MetriCal. A manifest describes an entire MetriCal workflow. Manifests feature variable substitution and templating to allow users to easily create multiple similar workflows with minimal effort. Manifests only expose certain modes, namely Init, Calibrate, Shape, and Consolidate.
This feature is intended to replace the deprecated Pipeline command.
* Validate command to validate manifests and other input files.
* New command to output a manifest template for users to fill in.
* IMU and LNS motion is now analyzed ahead of the calibration. If motion is considered deficient for calibration, a diagnostic is raised and the user is warned.
* All outputs for every command in the manifest now have default output paths. Users may override these paths as needed.

#### Changed[​](#changed "Direct link to Changed")

* IMU preintegration RMSE tables have been removed from the extrinsics info section of the report. These have been replaced by an IMU summary table in the summary section.
* Split IO error variants between filesystem-related IO errors and format-specific errors
* Tables are now printed using UTF-8 borders and ANSI styles
* Object space configurations now require a transform to be specified for each element in a mutual construction group
* Consolidate object space mode will now construct a new object-space with the appropriate mutual construction groups. This enables calibration optimizations that reduce the overall number of intermediate or nuisance parameters using a pre-surveyed object-space.
* Overwriting the plex is now an explicit act in the Init CLI. Users must choose to use the previous plex as a seed.
* Sync group IDs are now set / assigned on measurements, rather than on raw observations.
* Overwrite and UUID update handling in the Init CLI has been rewritten for clarity.

#### Fixed[​](#fixed "Direct link to Fixed")

* Fixed several issues in how the stereo rectification table is generated that would result in poor performance in constructing the stereo rectification histograms.
* Small bug in calculation for component timestamp resolution.
* Local Navigation System init errors are now properly reported.
* Fixed a bug where units were converted for angular SSEs in the Extrinsics Info section and IMU preintegration metrics (now a summary metric) prior to dividing by the trace.
* Fixed inconsistency in how sub-headings in the calibrated plex section were displayed
* Fixed an issue where a data diagnostic for multiple spatial subplexes would never be displayed to the user
* Report mode now faithfully reproduces the exact report info from calibrate mode
* Fixed a false-positive where data diagnostics would not be triggered because the data diagnostic derived from stereo reprojection metrics was counted on a per-point basis instead of a per-image basis
* Fixed a number of false-negative data diagnostic triggers that would occur due to one of:
  1. Requiring necessary component data across modalities
  2. Missing data for a given modality
  3. Camera feature coverage being incorrectly computed
  4. Required necessary components across modalities existed, but were part of a different spatial subgroup in the plex
* Textplots would crash MetriCal if there were no detections to plot in the timeline, since the minimal height for a chart was not met. The timeline chart has been revised completely for better readability and to avoid this crash.
* Single-topic runs no longer return a sync data diagnostic.
* A bug in the MCAP checks where a small file (less than `mcap::MAGIC` bytes long) would cause a panic because the magic-number check read past the end of the mmap'd slice.
* Camera and lidar motion filter settings in the Calibrate CLI must be set by the user explicitly now. The defaults have been removed, in favor of sane defaults for different scenarios.
Users may still set the thresholds manually using the `--camera-motion-threshold` and `--lidar-motion-threshold` flags.
* Included an import for Google's Cascadia Mono font to our report HTML to ensure consistent font rendering of our table output
* Fixed bugs in the dot-marker / CirUco detector that would prevent boards at ~45° relative to a camera from being detected at certain distances / distortions.
* Bug in the logic for printing the extrinsics info table.
* Final init plex table now correctly prints the final UUIDs of each component.
* Fixed a bug that would report incorrect values for motion / quality filter results in DI-003 when all detections were filtered.
* Motion filtering diagnostics have been corrected. This also means charts and diagnostics that rely on motion filter results have been corrected as well.
* Help messages for many CLI arguments have been fixed for clarity and correctness.
* Covariance propagation for inverted spatial constraints.
* Fixed construction of CRE costs in the case of consolidated objects

#### Removed[​](#removed "Direct link to Removed")

* Removes "unlicensed mode". Errors that would previously result in falling back to unlicensed mode will instead exit with an error. Unlicensed mode was previously added as an easy and inexpensive way to try out MetriCal; a pay-as-you-go option will soon fill that role instead.
* The topic-to-component mapping argument in Calibrate mode has been removed. Users may remap components in Init mode instead.
* ROS 1 bag support has been removed entirely. If you are using a ROS 1 bag, please convert it to MCAP using the [mcap CLI tool](https://mcap.dev/guides/getting-started/ros-1#convert-to-mcap).
* Bases are no longer input into MetriCal via the CLI. This lays the groundwork for a more robust basis handling system that will be implemented in the future.

## Version 14.1.2 - July 29, 2025[​](#version-1412---july-29-2025 "Direct link to Version 14.1.2 - July 29, 2025")

### Changelog[​](#changelog-1 "Direct link to Changelog")

#### Fixed[​](#fixed-1 "Direct link to Fixed")

* Fixed a bug in H264 decoding that would cause a panic in .deb releases.

## Version 14.1.1 - July 28, 2025[​](#version-1411---july-28-2025 "Direct link to Version 14.1.1 - July 28, 2025")

### Changelog[​](#changelog-2 "Direct link to Changelog")

#### Changed[​](#changed-1 "Direct link to Changed")

* Calibrations no longer fail when/if a camera initialization has stalled.

## Version 14.1.0 - July 25, 2025[​](#version-1410---july-25-2025 "Direct link to Version 14.1.0 - July 25, 2025")

### Overview[​](#overview-1 "Direct link to Overview")

Introducing [Local Navigation Systems](/metrical/calibration_models/local_navigation.md), or LNS, a new calibration modality. This aligns an odometry source with the rest of your sensor stack, making it ideal for sensor-to-chassis calibration. LNS is particularly useful for applications like autonomous vehicles, where precise alignment of sensors to the vehicle's navigation system is crucial. Follow along with an LNS calibration in its [calibration guide](/metrical/calibration_guides/local_navigation_cal.md).

We've also shifted around some terminology: what were "semantic constraints" are now called [Formations](/metrical/core_concepts/constraints.md#formations). Note this change will affect schemas, but plex configurations can use either terminology.

Reports should also be cleaner and more readable, with less visual clutter.
If you don't enjoy pretty colors or progress bars (who does?), you can now disable them with the [`--progress-indicators`](/metrical/commands/global_arguments.md#--progress-indicators-when) and [`--color`](/metrical/commands/global_arguments.md#--color-when) options.

Oh, one more change to call out here: our AprilGrid detector got *way* better. We hope you notice the difference!

### Changelog[​](#changelog-3 "Direct link to Changelog")

#### Added[​](#added-1 "Direct link to Added")

* Local Navigation System (LNS) support in the calibration process
* Added `--progress-indicators` and `--color` flags which can be set to `auto`/`enabled`/`disabled`. Allows users to reduce noise in the console output / when logging to a file. `auto` mode will try to intelligently detect when logging to a terminal and only enable these modes when that's the case.
* Use robust cost function in cam init process to handle poorly-behaved poses.
* Use `formations` as replacement for `semantic constraints` in the calibration process. This is a breaking change for schemas, so users should update their schemas accordingly.
* Adopt updated filtering logic for data intake
* MetriCal now utilizes a more restricted heuristic for determining prior covariance in init mode. This now results in cx and cy having more comparable prior variances, as well as focal length having a much more reasonable (albeit still very large) initial prior variance.

#### Changed[​](#changed-2 "Direct link to Changed")

* Change `NoSharedSyncGroups` diagnostic from Error to Warning
* Streamlined the output format to reduce the amount of visual clutter. Further, color mappings have changed slightly for different message types.

#### Fixed[​](#fixed-2 "Direct link to Fixed")

* Solver failures on camera init are now reported as errors, rather than just unwrapping the call without checking
* Modify the report output depending on the detection of projective compensation
* Fixed a bug with dot-marker / ellipse object-spaces where detections were not deterministic when applied to similar images. Users that use dot-marker object-spaces should now expect that running a dataset with the `-y` flag (without cached detections) should get the same behaviour and results every time MetriCal is run.
* Fixed a bug that would cause significantly reduced detections in certain scenarios when using an object space with multiple different target types.
* Redundant still observations are once again accepted in the adjustment
* Correct topic filtering behavior when requested topic cannot be read
* Correct counts for quality and motion filtering for non-camera/lidar observations
* Various improvements to the AprilGrid and Dot Marker detectors

#### Removed[​](#removed-1 "Direct link to Removed")

* The data diagnostic for projective compensation via high distortion parameter variance
* LowMutualObservationsCountError has been removed entirely from diagnostics

## Version 14.0.1 - May 21st, 2025[​](#version-1401---may-21st-2025 "Direct link to Version 14.0.1 - May 21st, 2025")

### Overview[​](#overview-2 "Direct link to Overview")

This release fixes a number of bugs that cropped up in 14.0.0.

### Changelog[​](#changelog-4 "Direct link to Changelog")

#### Fixed[​](#fixed-3 "Direct link to Fixed")

* Fixed a bug in our aprilgrid detector to better support interior detections on Camera ←→ LiDAR boards.
* Fixed a bug where extrinsics would not be generated in 1 Camera ←→ 1 LiDAR datasets.
* Fixed a bug where extrinsics would not be generated in 1 Camera ←→ 1 IMU datasets.
## Version 14.0.0 - May 12th, 2025[​](#version-1400---may-12th-2025 "Direct link to Version 14.0.0 - May 12th, 2025")

### Overview[​](#overview-3 "Direct link to Overview")

This release updates licensing to support different subscription tiers (R&D, Growth, Enterprise), introduces unlicensed calibration, and includes significant refactoring of internals. The unlicensed calibration mode allows users to test and refine their data capture process without needing to purchase a license. Several deprecated features have been removed, and various error code mappings have been updated for better clarity and consistency.

### Changelog[​](#changelog-5 "Direct link to Changelog")

#### Added[​](#added-2 "Direct link to Added")

* Support for Tangram's new subscription tiers and limits.
* Unlicensed calibration mode, allowing users to test and refine their data capture process without purchasing a license. This mode outputs all metrics and diagnostics as usual but hides the final calibration results.
* Association metadata is now written to the `results.json` file and required by default on deserialization (breaking change for previous results files).

#### Changed[​](#changed-3 "Direct link to Changed")

* The `no_offset` model variant in the [Init](/metrical/commands/calibration/init.md) command has been renamed to `lidar`.
* Mutual observation count between stereo pairs (and the corresponding data diagnostic) now considers all unique observations within a similar sync group, rather than explicit pair-wise overlap of individual targets in the greater object-space.
* Consolidated exit code handling for better error reporting.

#### Removed[​](#removed-2 "Direct link to Removed")

* The IMU `no_intrinsics` model has been removed from [Init](/metrical/commands/calibration/init.md) and is no longer supported by MetriCal.
* Exit code 11 has been deprecated; MetriCal will now correctly report exit code 1 on IO related errors when using the [Consolidate Object Space](/metrical/commands/calibration/consolidate.md) command.
* Various deprecated features and arguments including evaluate mode, `--topic-to-component`, and preset devices have been removed.

#### Deprecated[​](#deprecated "Direct link to Deprecated")

Version 1 license keys (prefixed with `key/`) have been **deprecated and will no longer work after November 1st, 2025**. If you use license keys prefixed with `key/`, please create new license keys (which will be version 2 keys, prefixed with `key2/`) and use them instead.

### Technical Notes[​](#technical-notes "Direct link to Technical Notes")

This version makes several breaking changes to the internal architecture of MetriCal, particularly around the handling of licensing and model variants. The new licensing tier system allows for more flexible deployment options tailored to different user needs.

The removal of the `evaluate` command represents a significant change to the workflow, but aligns with our focus on providing more accurate and reliable calibration measurements. Users who previously relied on this command should contact Tangram support for guidance on alternative approaches.

The mutual observation count improvement will lead to more accurate stereo pair diagnostics, especially in complex multi-camera systems where object-space observations may not perfectly overlap between cameras but still occur within similar sync groups.
Note that this release introduces a breaking change for results files generated by previous versions, as the new association metadata is now required during deserialization.

## Version 13.2.1 - April 9th, 2025[​](#version-1321---april-9th-2025 "Direct link to Version 13.2.1 - April 9th, 2025")

### Fixed[​](#fixed-4 "Direct link to Fixed")

* Image measurements that don't have enough data to derive a pose from are now filtered out of the optimization, rather than being assigned a "default" pose.

## Version 13.2.0 - April 3rd, 2025[​](#version-1320---april-3rd-2025 "Direct link to Version 13.2.0 - April 3rd, 2025")

### Overview[​](#overview-4 "Direct link to Overview")

This release introduces the Power Law camera model, improves camera initialization, and adds new diagnostics to help users identify and fix issues with their calibration data. The Power Law model is particularly effective for cameras with significant distortion, and the new data diagnostics will guide users through common problems during calibration data collection.

### Changelog[​](#changelog-6 "Direct link to Changelog")

#### Added[​](#added-3 "Direct link to Added")

* New [Power Law](/metrical/calibration_models/cameras.md) camera model, effective for cameras with significant distortion.
* Comprehensive data diagnostics to help identify issues in calibration data collection.
* New chart: "Observed Camera Range of Motion" to help visualize and diagnose potential projective compensation effects.
* Ability to consolidate multiple object spaces using object relative extrinsics with the new `consolidate-object-spaces` mode.

#### Changed[​](#changed-4 "Direct link to Changed")

* Enhanced camera initialization routine for better handling of heavily distorted images.
* Processed Observation Count table now distinguishes between filtering from quality vs. motion.
* Console output reorganized into a more readable report format.
* Improved error messages for data integrity issues.
* All charts now have clear titles for easier reference.
* Extrinsics tables are now shown in alphabetical order by component name.

#### Fixed[​](#fixed-5 "Direct link to Fixed")

* Fixed the rectification tables in console output.
* Tuned correction function for OpenCV Fisheye model.
* Modified Double Sphere model jacobians for better accuracy.
* Fixed timestamp casting error in timeline chart.

#### Removed[​](#removed-3 "Direct link to Removed")

* Evaluate mode is no more. A better implementation is planned for a future release of MetriCal.

### Technical Notes[​](#technical-notes-1 "Direct link to Technical Notes")

This version provides a significant enhancement to diagnostic capabilities, with particular focus on helping users understand why a calibration might be failing. The new data diagnostics system checks for common issues such as:

* Insufficient camera movement range
* Poor feature coverage across camera FOV
* Too many observations filtered by quality or motion
* Missing component dependencies required for calibration
* Insufficient mutual observations between camera pairs

Each diagnostic comes with a detailed explanation and suggestions for improvement, referencing specific charts in the report for additional context.

## Version 13.1.0 - February 25th, 2025[​](#version-1310---february-25th-2025 "Direct link to Version 13.1.0 - February 25th, 2025")

### Overview[​](#overview-5 "Direct link to Overview")

This update changes the way MetriCal reports errors and fixes various bugs with init mode.
In addition, we have made the decision to deprecate compatibility with both folder and ROS 1 bag datasets. This decision was made because we want to standardize our data ingestion code around MCAP and remove the need for MetriCal to handle idiosyncrasies with the other two input formats. These data formats are supported in 13.1.0, but will be removed in a future release.

For ROS 1 bags, it is very simple to use the [mcap CLI tool](https://mcap.dev/guides/getting-started/ros-1#convert-to-mcap) to convert them to an MCAP. We had recommended this even before the deprecation was announced, because it greatly improves dataset processing performance on our end.

For folder datasets, we will ensure that a suitable conversion tool exists before fully removing support for them in MetriCal.

### Changelog[​](#changelog-7 "Direct link to Changelog")

* Added deprecation warnings for folder and ROS 1 bag datasets.
* Changed errors to have better messaging and updated errors docs to match. Some exit codes are no longer returned in practice (although their codes are not reused).
* Foxglove CompressedVideo messages have been added to MCAP schema map
* Fix rectification tables in report output
* Fix basis transformations in MetriCal
* Fixed how init mode consumes URDFs. In particular, init mode now has a resolving strategy that prioritizes plexes first (based on their creation timestamp, newest plex first / oldest plex last), followed by URDFs being applied in order to seed extrinsics if a better extrinsic or spatial constraint is not first provided in an earlier plex or URDF.
* Fixed a bug where topic mappings were not applied to seed plexes that were being overwritten (i.e. when the `--overwrite` or `-y` flags were passed to the CLI).
* Fixed a bug where plex creation timestamps were not being preserved when overwriting topic mappings in init mode.
* Fixed a bug where no logging would occur if a pipeline file failed to parse

## Version 13.0.0 - January 14th, 2025[​](#version-1300---january-14th-2025 "Direct link to Version 13.0.0 - January 14th, 2025")

### Overview[​](#overview-6 "Direct link to Overview")

The major feature of this release is a big speedup (2-3x faster in testing) when processing H.264 datasets. Additionally, this release introduces some breaking changes to the `Markers` fiducial type (now named `SquareMarkers`) and minor ones to the format of some metrics in `results.json`. Both of these changes are fairly niche and we do not expect them to have an impact on the vast majority of users.

### Changelog[​](#changelog-8 "Direct link to Changelog")

* **(Breaking Change)** The `Markers` fiducial type has been renamed to `SquareMarkers`. Please update your object space file if you are using this (uncommon) fiducial type.
* **(Breaking Change)** Added `marker_ids` field to `SquareMarkers` and removed `marker_length`.
* **(Breaking Change)** The `results.json` file no longer contains a `metrics.optimized_object_space` field. Instead users should start using the `object_space` field at the top level of the results, as that now contains the inferred object space. This will not affect users who aren't already doing custom processing of the `results.json` file.
* Greatly improve performance when ingesting H.264 datasets

## Version 12.2.0 - December 13th, 2024[​](#version-1220---december-13th-2024 "Direct link to Version 12.2.0 - December 13th, 2024")

### Overview[​](#overview-7 "Direct link to Overview")

This release has some nice improvements to camera ←→ lidar calibration (especially in multi-target scenarios) as well as greatly improved AprilGrid detection quality. In addition, this release includes some visualization and logging quality-of-life improvements.

### Changelog[​](#changelog-9 "Direct link to Changelog")

#### Added[​](#added-4 "Direct link to Added")

* Added an optional `reflective_tape_width` field to the circle board object space format. This is used as a hint to the detector when identifying retroreflective circles in point clouds.

#### Changed[​](#changed-5 "Direct link to Changed")

* Improved detection quality of multiple circle boards in the same environment.
* Made various improvements to the underlying optimization calculations.
* Upgraded to Rerun v0.19.
* Made various improvements to visualization of lidar and image detections.

#### Fixed[​](#fixed-6 "Direct link to Fixed")

* Greatly improved detection quality on Kalibr-style AprilGrid targets.
* Fixed an issue with detecting image features with only a single adjacent tag.
* Addressed some small timestamp-related visualization bugs.

#### Removed[​](#removed-4 "Direct link to Removed")

* Evaluate mode has been hidden as a command, due to pending improvements. The mechanism that is currently used to perform the metric generation during evaluate mode runs an optimization, which can lead to some hard-to-interpret results. In particular, errors may projectively compensate into the object-space, which at present does not have any user-visible metrics readily available outside of the rerun visualization. For more information on this change, please see the ["Evaluate" mode documentation](/metrical/12.2/modes/evaluate).

## Version 12.1.0 - October 23rd, 2024[​](#version-1210---october-23rd-2024 "Direct link to Version 12.1.0 - October 23rd, 2024")

### Overview[​](#overview-8 "Direct link to Overview")

This is a small update. The main user-facing change is that MetriCal can now handle markerboards generated by newer versions of OpenCV. OpenCV introduced a silent breaking change to its markerboard generation in version 4.6.0, which can cause problems with MetriCal's detection of certain newer boards under some circumstances. For more information, please reference [the initial\_corner field in the markerboard docs.](/metrical/targets/target_overview.md)

### Changelog[​](#changelog-10 "Direct link to Changelog")

### Changed[​](#changed-6 "Direct link to Changed")

* Update internal OpenCV version from 4.5.4 to 4.10.0

### Added[​](#added-5 "Direct link to Added")

* Add support for both OpenCV "new" style markerboards.

## Version 12.0.0[​](#version-1200 "Direct link to Version 12.0.0")

Release Page

MetriCal Sensor Calibration Utilities repository for v12.0.0

### Overview[​](#overview-9 "Direct link to Overview")

This release is probably one of our largest ever. There are new modes, new mathematics, and more expressivity. And as with every major version bump, we've also made some changes to the CLI arguments. Most old arguments that have changed have been deprecated, not removed, so you can still use your scripts from v11.0 (for the most part). You'll just get loud warnings about switching over before v13 comes around...
### Change Your MetriCal Alias![​](#change-your-metrical-alias "Direct link to Change Your MetriCal Alias!")

First and foremost: If you're a current user, we suggest adding the `--tty` flag to your metrical bash alias. See an example in the [setup docs](/metrical/configuration/installation.md). This allows MetriCal to render progress bars in the terminal, which makes it a much nicer experience when processing large datasets. Otherwise, you'll just see a blank screen for a while.

### UI + UX Highlights[​](#ui--ux-highlights "Direct link to UI + UX Highlights")

#### Offline Licensing[​](#offline-licensing "Direct link to Offline Licensing")

MetriCal now offers users the option to cache licenses for offline use. If you're running MetriCal "in the field" (aka away from a modem), this is the feature for you! Note that offline licenses are only valid for a week before MetriCal needs to ping a Tangram server. Visit [the licensing docs](/metrical/configuration/license_usage.md) for setup and details.

#### Descriptive Error Messages[​](#descriptive-error-messages "Direct link to Descriptive Error Messages")

We've reworked every. single. error case in MetriCal to have a descriptive error message. Instead of getting something generic, you'll now see the error, the error's cause, and a helpful suggestion on how to fix it. Many of these suggestions link directly to the documentation, so you don't have to search around for the "right answer" anymore.

Miette Is Great

For those of you developing your own Rust programs, we couldn't recommend [miette](https://docs.rs/miette/latest/miette/) enough.

#### Cached Detections[​](#cached-detections "Direct link to Cached Detections")

Tired of re-running detections when running a new calibration? We've got you covered. MetriCal now caches its detections from the initial dataset processing. This means you can change models and test different configurations in a fraction of the processing time.

There are two ways MetriCal finds detections:

* Passed to the CLI via the $DATA required argument. For instance, instead of passing an MCAP, one could just pass that MCAP's cached detections JSON.
* Automatically found in the same directory as the dataset.

Cached detections are written to the same directory as the dataset, named with a `.detections.json` extension to the dataset name. For example, `/dataset/example.mcap` would have a cached detections file named `/dataset/example.detections.json`. If MetriCal finds this file, it will use it instead of re-processing the entire dataset.

Potential Naming Conflicts

This also means that datasets with the same name will produce detection caches with the same name. We don't advise naming all of your files the same thing anyway, but... you do you.

If you have cached detections, but really want to re-process them anyway, just pass the `--overwrite`/`-y` flag to Calibrate or Evaluate mode.

#### Multiple Seed Plex in Init Mode[​](#multiple-seed-plex-in-init-mode "Direct link to Multiple Seed Plex in Init Mode")

You can now pass multiple seed plex to Init mode. This is useful in a "big rig" scenario, when it's just not feasible to collect every sensor's data in one run. For example, one might run data from 4 cameras individually, then combine them all into one system to gather extrinsics via Init mode.

When processing multiple seed plex, the newest plex is given priority, being processed for new information from newest to oldest. If there is an existing Init'd plex for the dataset, that plex is also used as a seed.
This makes it even easier to compare and contrast different calibrations. #### Display Mode[​](#display-mode "Direct link to Display Mode") Version 12.0 also introduces Display mode, which allows you to visualize the applied calibration to any dataset. This takes the place of the `--render` flag in Calibrate and Evaluate modes, which now just renders the detections used for calibration. Important: Display mode expects a running instance of Rerun v0.18. ### Changelog[​](#changelog-11 "Direct link to Changelog") #### Added[​](#added-6 "Direct link to Added") * CLI: * Change the logging level more easily with verbose flags: `-v` (Debug), `-vv` (Trace), `-q` (Warn), or `-qq` (Error). * Multi-progress bar to estimate observation read-in time. Make sure to add `-t` to a Docker alias to use this functionality! * Offline licensing. See [the licensing docs](/metrical/configuration/license_usage.md) for setup and details. * An `OptimizationProfile` argument to Calibrate mode that allows users to tune the parameters of the adjustment e.g. relative error threshold, absolute error threshold, and max iterations. * A binned outlier count is now reported in the camera binned reprojection table. * Display mode to visualize the output of a calibration. * A new sub-mode, `metrical shape tabular` which can convert a plex into a simplified, stable, tabular format that holds a set of intrinsics (and direct artefacts of the intrinsics, such as look-up tables) alongside extrinsics. The tabular format from this mode can be exported as either JSON or MsgPack binary (to compress the total filesize, as LUTs can be quite large). * Tables for Component Relative Extrinsics, as well as Preintegrated IMU errors. This should give you a much better idea of extrinsics quality beyond the abstract covariance values for spatial constraints. * Data I/O: * Expanded list of encodings for YUYV image types in ROS - `uyvy`, `UYVY`, `yuv422`, `yuyv`, `YUYV`, and `yuv422_yuy2` are all supported. * That's gotta be all of them, right? Right? Let's all come together to make life easier for your old Tangram pals. * H264 message types for ROS1 (from [this project](https://github.com/orchidproject/x264_image_transport)) and MCAP (from [Foxglove's CompressedVideo](https://docs.foxglove.dev/docs/visualization/message-schemas/compressed-video/)). * Added support to `CompressedImage` types for the alternate spelling for "[jpg](https://www.youtube.com/watch?v=jmaUIyvy8E8\&t=0s)". * Detection and reweighting of Paired 3D Point outliers. #### Changed[​](#changed-7 "Direct link to Changed") * Algorithm Changes: * Use new M-estimators, and remove all explicit outlier logic. No more outlier flags! * Paired Plane Normals between lidar-lidar pairs are no longer used * The motion filter has been rewritten to include both images and lidar detections. Users can manipulate the motion detector thresholds by using the `--camera-motion-threshold` and `--lidar-motion-threshold` flags, or just turn it off with `--disable-motion-filter`. * The motion filter now runs *after* all detections have been processed. This means that you can manipulate the motion filter over cached detections, too. * CLI: * `UmbraOutput` is now `CalibrationOutput`. * All error codes have been changed in favor of simpler error code and more descriptive fixes. * Look Up Tables generated via the `metrical shape lut` command are now generated using the `image_luts` module from the applications crate. 
This changes the final schema for look up tables but makes them more directly compatible with OpenCV and other software that expects both row and column remappings to be separated. * Init command: * Init mode can now take multiple plex as seeds. * If a previously generated init plex is found at the same location as the plex-to-be-written, then the previous plex is used as a seed for the new plex. * Change of basis is respected and applied in Init mode when seeding the plex with an input plex. * Rendering: * Observations and detections are rendered in the same entity in the visualization. * When rendering more than one point cloud in Display mode, each component's point clouds are uniformly colored rather than colored by the point's intensity value. This allows for easier comparison between components. * Calibrate mode no longer renders the results of a calibration, only the observations and detections. Results are applied in Display mode. #### Fixed[​](#fixed-7 "Direct link to Fixed") * CLI: * Component names with "/" no longer cause file I/O issues with the `shape focus` command. * Detectors: * The circle detector for point clouds and the feature detector for images are now much more robust to misdetections and noise. * Init command: * Init mode now correctly handles all (spec'd) cases in which a seed plex is used as the basis for a new plex. * Topics that are not usable by MetriCal for the purposes of calibration are now filtered out. * If there are no usable topics, Init mode will list the usable topics in a dataset for the user. * There was a (self-induced) memory pressure build-up during lidar detection that has been remedied. #### Removed[​](#removed-5 "Direct link to Removed") * Init command: * Preset devices have been removed from Init mode. ## Version 11.0.0[​](#version-1100 "Direct link to Version 11.0.0") Release Page MetriCal Sensor Calibration Utilities repository for v11.0.0: ### Overview[​](#overview-10 "Direct link to Overview") This version bump is a big one! Data processing improvements, algorithmic improvements, CLI simplification... there's a little something for everyone here. Most of the changes in 11.0.0 were made based on testing from customers in the field. Thanks, everyone! Note that many CLI arguments have been shifted or removed. Be sure to check the changelog and updated documentation to make sure your settings remain intact. ### Changelog[​](#changelog-12 "Direct link to Changelog") #### Added[​](#added-7 "Direct link to Added") * Visualization improvements: * Render navigation states and IMU data by default. * Render all observations when log level is set to debug. * Change of Basis tooling: * Add `--topic-to-observation-basis` global argument to all modes in order to map a coordinate basis to each component's observations on plex construction ([docs](/metrical/14.1/commands/commands_overview#universal-options)). * Add `--topic-to-component-basis` global argument to all modes in order to map a coordinate basis to each component on plex construction ([docs](/metrical/14.1/commands/commands_overview#universal-options)). * Changed-basis plex is now included in MetriCal output ([docs](/metrical/core_concepts/constraints.md#to-and-from)). #### Changed[​](#changed-8 "Direct link to Changed") * Init mode only looks through the first 100 observations to create the initial plex; this should greatly speed up Init mode for most datasets. * All observations are now run through their respective detectors before image motion filtering occurs. 
* The motion filter acts on the image detections themselves, not the entire image. This should improve the quality of the motion filter in busy, "noisy" datasets ([docs](/metrical/commands/calibration/calibrate.md#the-motion-filter)). * A user-specified root is now required to generate the URDF from a plex in Shape mode ([docs](/metrical/commands/calibration/shape/shape_urdf.md)). * Reformulate IMU calibration approach: * Remove unnecessary IMU initialization. * Implement improvements to IMU optimization mathematics. * Only one IMU bias is inferred during calibration instead of a wandering bias. * Shape mode arguments now follow the order `metrical shape [command] [arguments] [plex] [output]` ([docs](/metrical/commands/calibration/shape/shape_overview.md)). * `license` and `report-path` arguments are now global arguments, and can be passed to any mode ([docs](/metrical/commands/commands_overview.md)). * Lower the minimum number of points required to fit a circle in the circle detector. #### Fixed[​](#fixed-8 "Direct link to Fixed") * Init mode now properly handles differences between the seeded plex (passed through with `-p`) and the created plex. This has been a bug for longer than we care to admit; we're glad it's fixed ([docs](/metrical/commands/calibration/init.md))! * Observations are no longer held in memory while motion filtering occurs. This greatly reduces memory usage on LiDAR-heavy datasets. #### Removed[​](#removed-6 "Direct link to Removed") * Caching of detections and filtered output is no longer supported in Calibrate and Evaluate modes. ## Version 10.0.0 (Yanked)[​](#version-1000-yanked "Direct link to Version 10.0.0 (Yanked)") Yanked! This release was yanked due to a critical bug introduced in the [Init command](/metrical/commands/calibration/init.md) that wasn't caught in time. The odds of many users running into this bug were low, but we played it safe and yanked this version entirely. All relevant changes will be added to the changelog for 11.0.0. ## Version 9.0.0[​](#version-900 "Direct link to Version 9.0.0") Release Page MetriCal Sensor Calibration Utilities repository for v9.0.0: ### Overview[​](#overview-11 "Direct link to Overview") Version 9.0.0 can be considered a refinement of v8.0, with a focus on improving the user experience and clarifying outputs. It also introduces a few new intrinsics models across components. Some default behavior has changed as well, so be sure to check the changelog for details. This release also includes a big update to rendering. Be sure to update your Rerun version to v0.14! ### Changelog[​](#changelog-13 "Direct link to Changelog") #### Added[​](#added-8 "Direct link to Added") * Introduced `LidarSummary` summary statistics to report lidar-specific metrics ([docs](/metrical/14.1/results/report#ss-3-lidar-summary-statistics)). * Support for new IMU intrinsics: Scale, Shear, Rotation, and G-Sensitivity ([docs](/metrical/calibration_models/imu.md)). * Support for the Omnidirectional camera model ([docs](/metrical/calibration_models/cameras.md#omnidirectional-omni)). #### Changed[​](#changed-9 "Direct link to Changed") * MetriCal now uses Rerun v0.14! 🎊 Make sure to update your version of Rerun accordingly. * The summary statistics table is now three tables, for optimization, cameras, and lidar respectively ([docs](/metrical/results/report.md#output-summary)). * `PerComponentRMSE` in Summary Statistics is now `CameraSummary` ([docs](/metrical/results/report.md#ss-002-camera-summary-statistics)). 
* Circle detector's `detect_interior_points` option is now a mandatory variable, and has no default value ([docs](/metrical/targets/target_overview.md)). * Circle detector now takes an `x_offset` and `y_offset` variable to describe the center of the circle w\.r.t. the full board frame ([docs](/metrical/targets/target_overview.md)). * Object relative extrinsics (OREs) are now generated by default. In turn, the `--enable-ore-inference` flag has been removed and replaced with `--disable-ore-inference` ([docs](/metrical/commands/calibration/calibrate.md#--disable-ore-inference)). * The camera component initialization process during calibration has been improved to better handle significant distortion. #### Fixed[​](#fixed-9 "Direct link to Fixed") * Rerun rendering code has been completely refactored for user clarity and speed of execution. * Lidar-lidar datasets are now rendered and registered along with camera-lidar. * Object relative extrinsics are now rendered when available. * Images now use lookup tables properly for quick correction. * Spaces have been reorganized for clarity and ease of use. * Datasets without cameras no longer print empty camera tables. ## Version 8.0.1[​](#version-801 "Direct link to Version 8.0.1") Release Page MetriCal Sensor Calibration Utilities repository for v8.0.1: ### Overview[​](#overview-12 "Direct link to Overview") This version fixes a small bug found in pipeline license validation. ### Changelog[​](#changelog-14 "Direct link to Changelog") #### Fixed[​](#fixed-10 "Direct link to Fixed") * `null` license values in a pipeline configuration are discarded, not interpreted as a provided license key. *** ## Version 8.0.0[​](#version-800 "Direct link to Version 8.0.0") Release Page MetriCal Sensor Calibration Utilities repository for v8.0.0: ### Overview[​](#overview-13 "Direct link to Overview") This release brings a ton of new features to the MetriCal CLI, most of them focused on improving the user experience. The biggest difference is one you won't see: all of the math done during optimization is now fully sparse, which means it takes a *lot* less memory to run a calibration. And smart convergence criteria means that calibrations are faster, too! We've also added a new mode, `pipeline`, which allows you to run a series of commands in serial. Find out more about it in the [Pipeline Command](/metrical/14.1/commands/pipeline) section of the documentation. ### Changelog[​](#changelog-15 "Direct link to Changelog") #### Added[​](#added-9 "Direct link to Added") * Pipeline mode. This executes a series of Commands in serial, as they're written in a pipeline JSON file. * Render the optimized plex at the end of a calibration. * The subplex ID for a spatial constraint now shows up in the Extrinsics table. * Input JSON files with comments are now accepted as valid. #### Changed[​](#changed-10 "Direct link to Changed") * All calibrations now undergo outlier detection and reweighting as part of the BA process. Outliers are detected for cameras, lidar, and relative extrinsics. * Summary table is sorted by component name, not by UUID. * Summary statistics in console are now computed using a weighted RMSE. * The bundle adjustment is now a fully sparse operation, relieving memory pressure on larger datasets. #### Fixed[​](#fixed-11 "Direct link to Fixed") * The height of the sync group chart now adjusts with the number of components present in the dataset. * Bug in Init mode when using a RealSense435Imu preset. 
* All stereo pairs are now derived and graphed in console output #### Removed[​](#removed-7 "Direct link to Removed") * The `--metrics-with-outliers` flag and the `--outlier-filter` flag have been removed. *** ## Version 7.0.1[​](#version-701 "Direct link to Version 7.0.1") Release Page MetriCal Sensor Calibration Utilities for v7.0.1: ### Overview[​](#overview-14 "Direct link to Overview") This patch release fixes various errata found in the [v7.0.0 release](https://gitlab.com/tangram-vision/platform/metrical/-/releases/v7.0.0). ### Changelog[​](#changelog-16 "Direct link to Changelog") #### Fixed[​](#fixed-12 "Direct link to Fixed") * Rendering the correction at the end of a calibration actually uses the corrected plex (rather than the input plex). * The extrinsics table now only shows delta values if an input plex is provided. * Camera-lidar extrinsics rendering in Rerun now takes the spatial constraint with the minimum covariance. This is different from the previous behavior where all spatial constraints were rendered, regardless of quality. *** ## Version 7.0.0[​](#version-700 "Direct link to Version 7.0.0") Release Page MetriCal Sensor Calibration Utilities for v7.0.0: ### Overview[​](#overview-15 "Direct link to Overview") Welcome, v7.0.0! Yes, merely a week after v6.0.0, we bump major versions. Classic. #### Revised Documentation[​](#revised-documentation "Direct link to Revised Documentation") It's finally here! The revised Tangram Vision documentation site is live. This documentation site holds readmes and tutorials on all of Tangram Vision's products. The MetriCal section is fully-featured, and based on this release. We will be maintaining all documentation for this and newer versions on the official docs site from here on out. #### LiDAR-LiDAR Calibration[​](#lidar-lidar-calibration "Direct link to LiDAR-LiDAR Calibration") MetriCal now supports LiDAR-LiDAR calibration, no cameras needed. Users will need a [lidar circle target](/metrical/targets/target_overview.md) to calibrate LiDAR. #### New Modes - `pretty-print` and `evaluate`[​](#new-modes---pretty-print-and-evaluate "Direct link to new-modes---pretty-print-and-evaluate") v7.0.0 introduces two new modes: * Pretty Print does what it says on the tin: prints the plex or results of a calibration in a human-readable format. This is useful for debugging and for getting a quick overview of the calibration. Docs: [https://docs.tangramvision.com/metrical/commands/pretty\_print/](/metrical/13.2/modes/pretty_print) * Evaluate can apply a calibration to a given dataset and produce metrics to validate the quality of the calibration. Docs: [https://docs.tangramvision.com/metrical/commands/evaluate/](/metrical/12.0/modes/evaluate). The Calibrate mode, by extension, no longer has an `--evaluate` flag. This is just Evaluate mode. #### Revamped Rendering options + Deprecated Review mode[​](#revamped-rendering-options--deprecated-review-mode "Direct link to Revamped Rendering options + Deprecated Review mode") Review mode is no longer! Instead, passing the `--render` flag to either Calibrate or Evaluate mode will render the corrected calibration at the end of the run. Also, the `--render` and `--render-socket` options are no longer global. Instead, they are only applicable to Calibrate and Evaluate modes. ### Changelog[​](#changelog-17 "Direct link to Changelog") #### Added[​](#added-10 "Direct link to Added") * Support for calibrating multi-LiDAR, no-camera datasets. 
This still leverages MetriCal's circle target, but no longer requires a camera or any image topics to be present in order to calibrate. * Pretty Print mode for printing out a plex in a human readable format. * Evaluate mode to evaluate the quality of a calibration on a test dataset. This is a reinterpretation of the `--evaluate` flag that was in the Calibrate mode; it's just been given its own command for ease of use. * A `verbose` flag to the shape mode to print the plex that has been created from the command. * Additional descriptions to errors that can be generated by calling the calibrate mode. #### Changed[​](#changed-11 "Direct link to Changed") * Calibrate + Evaluate mode now renders its own corrections (rather than punting that capability to review). * The `--render` flag at the global level has been moved to the Calibrate & Evaluate modes. #### Removed[​](#removed-8 "Direct link to Removed") * Review mode and README mode have been removed completely. Review mode's previous functionality is now split between Pretty Print mode and Calibrate mode. * The `--evaluate` flag in Calibrate mode. #### Fixed[​](#fixed-13 "Direct link to Fixed") * Summary statistics tables now have the correct units displayed alongside their quantities. * Printing the results of a plex with no extrinsics will now print an empty table, rather than nothing at all. * Nominal extrinsics deltas in tables are now represented by the string "--". #### Errata[​](#errata "Direct link to Errata") * The "corrected" rendering at the end of a calibration run mistakenly uses the input plex, not the output plex. Scheduled fix: v7.0.1. * The output extrinsics table does not correctly calculate the delta between the input and output plex. Scheduled fix: v7.0.1. * Rendering the corrected camera-lidar registration would take the first spatial constraints available, rather than taking the constraint with the minimum covariance. This often makes calibrations appear wildly incorrect, despite a good calibration. Scheduled fix: v7.0.1. --- # MetriCal Command: Calibrate ## Usage[​](#usage "Direct link to Usage") Calibrate - CLI Example ``` metrical calibrate [OPTIONS] <DATASET> <INPUT_PLEX> <INPUT_OBJECT_SPACE> ``` Calibrate - Manifest Example ``` command = "calibrate" dataset = "{{variables.dataset}}" input-plex = "{{init-stage.initialized-plex}}" input-object-space = "{{variables.object-space}}" camera-motion-threshold = "disabled" lidar-motion-threshold = "disabled" optimization-profile = "standard" preserve-input-constraints = false disable-ore-inference = false overwrite-detections = false override-diagnostics = false render = false detections = "{{auto}}" results = "{{auto}}" ``` ## Purpose[​](#purpose "Direct link to Purpose") The Calibrate command runs a full bundle adjustment over the input calibration data. It requires three main arguments: * The input data. * An initial plex (usually derived from [Init command](/metrical/commands/calibration/init.md)). This represents a naive guess at the state of your system. * An [object space](/metrical/core_concepts/object_space_overview.md) file. This tells MetriCal what targets to look for during calibration. We mean it when we say this runs a *full* bundle adjustment. MetriCal will optimize both the plex values (the calibration of your system, effectively) *and* the object space values. This means that MetriCal is robust to bent boards, misplaced targets, etc. All of these values will be solved for during the adjustment, and your calibration results will be all the better for it. 
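For orientation, here is a minimal end-to-end sketch that initializes a plex from a dataset and then runs the full adjustment. The topic patterns, models, and file names (`init_plex.json`, `object_space.json`) are illustrative placeholders, not required names.

```
# Illustrative only: adjust topic patterns, models, and paths to your own system.
metrical init -m *cam*:opencv-radtan -m *lidar*:lidar $DATA init_plex.json
metrical calibrate --results results.mcap $DATA init_plex.json object_space.json
```
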
We use ANSI terminal codes for colorizing output according to an internal assessment of metric quality. Generally speaking: * Cyan: spectacular * Green: good * Orange: okay, but generally poor * Red: bad Note that these judgments are applied internally to the software and have limits that may change release-to-release. We advise that you determine your own internal limits for each independent metric. ### Cached Detections[​](#cached-detections "Direct link to Cached Detections") Extracting detections from a dataset usually takes the lion's share of time during a run. To make this less onerous on subsequent runs, MetriCal will automatically create a cache of detections in MCAP format. It will then save them to the same directory as the input dataset, as well as recall that data on subsequent runs of the same calibration. Detections rely on Component UUIDs Detections rely on matching UUIDs between its data and the components in the plex to correctly attribute observations. This means that *generating new UUIDs for an initialized plex will require new detections*, since the UUIDs will have changed. ### Extracting Optimized Values[​](#extracting-optimized-values "Direct link to Extracting Optimized Values") There is a *lot* of data in the MetriCal results. Luckily, it's all MCAP, and it's fairly easy to pick apart. Use the [MCAP CLI tool](https://mcap.dev/guides/cli) to extract this data into its own file for easy analysis and comparison to the original inputs. You can also read more about the results MCAP format in its [documentation page](/metrical/results/output_file.md). ``` # Retrieve the optimized plex mcap get attachment --name optimized-plex results.mcap > optimized_plex.json # Retrieve the optimized object space mcap get attachment --name optimized-object-space results.mcap > optimized_object.json ``` ### The Motion Filter[​](#the-motion-filter "Direct link to The Motion Filter") MetriCal runs a motion filter over any and all images and point clouds in the input dataset. This filter removes any features that might be affected by motion blur, rolling shutter, false detections, or other artifacts. This is a critical step in the calibration process, as it ensures that the data used for calibration is the best we can get. Both cameras and lidar have motion filters that get run automatically for every calibration. The thresholds for motion must be set before every calibration run. MetriCal requires this for two reasons: 1. Smart motion filtering can dramatically cut down on calibration time and make processes more efficient 2. We want you to know a motion filter even exists! It can be jarring to run a calibration and see half of your data was skipped over. See the breakdown of camera and lidar motion thresholds below for more on those values. There are three states that the motion filter can assign to a component. | State | Description | | ----------------- | ---------------------------------------------------------------------------------------------------------------------------- | | In Motion | The movement between subsequent detections is *above* the motion threshold. | | Still | The movement between subsequent detections is *below* the motion threshold. | | Still (Redundant) | The component is Still *and* the movement between detections is small enough that they can be considered the same detection. | ![Motion States over time](/assets/images/motion_state_timeline_v12-b70ea6416820079f7f96125780af6ac3.png) The motion filter works at the sync group level, not component-by-component. 
You can see how this decision affects the filter's behavior over time. If one component is moving, the motion filter labels all other components as moving as well, regardless of their actual detected motion status. This decision makes intuitive sense: in a calibration dataset, all components (or all object spaces) should be moving together. Snapshots and Inconsistent Timestamps The motion filter inherently assumes a stream is producing high-frequency observations with consistent timestamps, e.g. a 30fps camera. For observation streams with inconsistent or infrequent observations, even a low motion threshold can filter everything! It can also be a sign of malformed timestamps. If either of these things sounds like your data, you should disable the corresponding motion filter entirely. ### Camera Motion Threshold[​](#camera-motion-threshold "Direct link to Camera Motion Threshold") The image motion filter sets its threshold based on the average flow of detected features between subsequent frames. Background motion is not considered, so you can still use this filter when taking data in a busy place. Any motion that results in an average pixel shift higher than the camera motion threshold (`--camera-motion-threshold`) is considered *In Motion*. In addition to assigning a custom non-negative number, there are a number of predefined profiles that can be used for convenience: | Value | Alias | Meaning | | ---------- | ----------------- | -------------------------------------------------------------------------------------------------------- | | `strict` | `rolling-shutter` | 1 pixels/frame. A low threshold, necessary for rolling shutter cameras. | | `lenient` | `motion-cal` | 10 pixels/frame. A higher threshold, suitable for calibrations that require some motion (IMU, odometry). | | `disabled` | N/A | No motion filtering, disabled for camera streams entirely | ![Motion States for Cameras](/assets/images/motion_state_camera_v12-dff1edfacd3ba24b06f37f1379577fa7.png) ### Lidar Motion Threshold[​](#lidar-motion-threshold "Direct link to Lidar Motion Threshold") The lidar motion filter sets its threshold based on the geometric movement of the detected circle center between subsequent observations. Any motion that results in an average distance shift higher than the lidar motion threshold (`--lidar-motion-threshold`) is considered *In Motion*. In addition to assigning a custom non-negative number, there are a number of predefined profiles that can be used for convenience: | Value | Alias | Meaning | | ---------- | ------------ | -------------------------------------------------------------------------------------------------------------------- | | `strict` | `poor-sync` | 0.02 meters/obs. A low threshold, necessary when there is poor synchronization between lidars or camera-lidar pairs. | | `lenient` | `motion-cal` | 0.1 meters/obs. A higher threshold, suitable for calibrations that require some motion (IMU, odometry). | | `disabled` | N/A | No motion filtering, disabled for lidar streams entirely | ![Motion States for Lidar](/assets/images/motion_state_lidar-e8c857bdb448f715cead0e0c0e9235cf.png) ## Examples[​](#examples "Direct link to Examples") #### Run a calibration. 
Save the results to `results.mcap`[​](#run-a-calibration-save-the-results-to-resultsmcap "Direct link to run-a-calibration-save-the-results-to-resultsmcap") > ``` > metrical calibrate \ > --results results.mcap \ > $DATA $PLEX $OBJSPC > ``` #### Visualize detections[​](#visualize-detections "Direct link to Visualize detections") > Make sure you have a Rerun server running! > > ``` > metrical calibrate \ > --render \ > $DATA $PLEX $OBJSPC > ``` #### Save calibration log output[​](#save-calibration-log-output "Direct link to Save calibration log output") > There's so much data in a calibration log. This is where the global [`--report-path`](/metrical/commands/global_arguments.md#--report-path-report_path) flag really comes in handy; you can save all of that log output to an HTML file for easy viewing. *Note*: Using a manifest will write a report for the entire run automatically into the workspace directory, no `--report-path` argument needed. > > ``` > metrical calibrate --report-path metrical.html $DATA $PLEX $OBJSPC > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[DATASET][​](#dataset "Direct link to \[DATASET]") > The dataset with which to calibrate, or the detections from an earlier run. Users can pass MCAP files (usually with an `.mcap` extension) or a top-level directory containing a set of nested directories for each topic #### \[INPUT\_PLEX][​](#input_plex "Direct link to \[INPUT_PLEX]") > The input plex path (initialized plex, optimized plex, or calibrate results MCAP) #### \[INPUT\_OBJECT\_SPACE][​](#input_object_space "Direct link to \[INPUT_OBJECT_SPACE]") > Path to object space description (JSON file or calibrate results MCAP) ## Options[​](#options "Direct link to Options") #### Global Arguments[​](#global-arguments "Direct link to Global Arguments") As with every command, all [global arguments](/metrical/commands/global_arguments.md) are supported (though not all may be used). #### --camera-motion-threshold \[PROFILE|VALUE][​](#--camera-motion-threshold-profilevalue "Direct link to --camera-motion-threshold \[PROFILE|VALUE]") > This threshold is used for filtering camera data based on detected feature motion in the image. An image is considered still if the average delta in features between subsequent frames is below this threshold. The units for this threshold are in pixels/frame. > > Predefined profiles: > > * `strict` or `rolling-shutter`: A low threshold, necessary for rolling shutter cameras. (1 pixels/frame) > * `lenient` or `motion-cal`: A higher threshold, suitable for calibrations that require some motion (IMU, odometry). (10 pixels/frame) > * `disabled`: No motion filtering, disabled for camera streams entirely. (inf pixels/frame) > * Or a custom non-negative numeric value in pixels/frame. #### --lidar-motion-threshold \[PROFILE|VALUE][​](#--lidar-motion-threshold-profilevalue "Direct link to --lidar-motion-threshold \[PROFILE|VALUE]") > This threshold is used for filtering lidar data based on detected feature motion in the point cloud's detected circle center. A point cloud is considered still if the average delta in metric space between subsequent detected circle centers is below this threshold. The units for this threshold are in meters/observation. > > Predefined profiles: > > * `strict` or `poor-sync`: A low threshold, necessary when there is poor synchronization between lidar and camera. (0.02 meters/obs) > * `lenient` or `motion-cal`: A higher threshold, suitable for calibrations that require some motion (IMU, odometry). 
(0.1 meters/obs) > * `disabled`: No motion filtering, disabled for lidar streams entirely. (inf meters/obs) > * Or a custom non-negative numeric value in meters/obs. #### --optimization-profile \[OPTIMIZATION\_PROFILE][​](#--optimization-profile-optimization_profile "Direct link to --optimization-profile \[OPTIMIZATION_PROFILE]") > **Default: standard** > The optimization profile of the bundle adjustment. This is a high-level setting that controls the optimization's maximum iterations, relative cost threshold, and absolute cost threshold. > > Possible values: > > * **performance**: Use this profile when speed is necessary > * **standard**: This is a balanced profile between speed and accuracy > * **minimize-error**: This profile maximizes accuracy. For some small datasets, this profile may be just as fast as the standard profile. However, for larger datasets, this profile may take significantly longer to converge #### --opt-max-iterations \[OPT\_MAX\_ITERATIONS][​](#--opt-max-iterations-opt_max_iterations "Direct link to --opt-max-iterations \[OPT_MAX_ITERATIONS]") > The maximum iteration count on the bundle adjustment. This is a hard limit on the number of iterations for the bundle adjustment. If the optimization does not converge within this number of iterations, the optimization will stop and return the current state of the optimization. #### --opt-relative-threshold \[OPT\_RELATIVE\_THRESHOLD][​](#--opt-relative-threshold-opt_relative_threshold "Direct link to --opt-relative-threshold \[OPT_RELATIVE_THRESHOLD]") > The relative cost termination threshold for the bundle adjustment. "Relative cost" is the difference in cost between iterations. If the relative cost between iterations is below the threshold, the calibration will exit #### --opt-absolute-threshold \[OPT\_ABSOLUTE\_THRESHOLD][​](#--opt-absolute-threshold-opt_absolute_threshold "Direct link to --opt-absolute-threshold \[OPT_ABSOLUTE_THRESHOLD]") > The absolute cost termination threshold for the bundle adjustment. "Absolute cost" is the value of the cost at a given iteration of the optimization. In practice, this value is hardly ever reached; the optimization will hit a local minimum well above this value and exit #### --preserve-input-constraints[​](#--preserve-input-constraints "Direct link to --preserve-input-constraints") > Preserve the spatial constraints that were input for this dataset. > > The default behavior is to discard any constraints that were not derived from the data (e.g. spatial constraints from sync groups) or optimized. Pass this flag if the output plex should include any constraints that were input but not found or optimized. #### --disable-ore-inference[​](#--disable-ore-inference "Direct link to --disable-ore-inference") > Disable inference of object relative extrinsic constraints in the bundle adjustment. > > The default behavior is to infer object relative extrinsic constraints in the bundle adjustment. Pass this flag if object relative extrinsic constraints should not be inferred; this may save a substantial amount of time in datasets using many object spaces. > > It's generally expected that disabling ORE inference will not affect calibration accuracy. The main downside to setting this flag is that MetriCal will not output the object space intrinsics as part of the calibration results, which may disable visualization of the target poses when using `metrical display`. 
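Before moving on to the remaining flags, here is one way the motion threshold and optimization profile options above might combine in a single run. The particular values are illustrative; only the flag names and profile values come from the documentation above.

```
# Illustrative only: strict filtering for a rolling shutter camera rig, no lidar
# motion filtering, and the slower, more accurate optimization profile.
metrical calibrate \
    --camera-motion-threshold rolling-shutter \
    --lidar-motion-threshold disabled \
    --optimization-profile minimize-error \
    --results results.mcap \
    $DATA $PLEX $OBJSPC
```
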
#### -y, --overwrite-detections[​](#-y---overwrite-detections "Direct link to -y, --overwrite-detections") > Overwrite the detections at this location, if they exist. #### --override-diagnostics[​](#--override-diagnostics "Direct link to --override-diagnostics") > When this flag is passed, a calibration dataset that presents data diagnostics errors will still output a calibration result. #### -r, --render[​](#-r---render "Direct link to -r, --render") > Whether to visualize the detections using Rerun. This requires a working installation of Rerun on your machine. You may also need to start a Rerun server manually if you are running this in Docker. > > Read more about configuring Rerun [here](/metrical/configuration/visualization.md). #### --render-socket \[RENDER\_SOCKET][​](#--render-socket-render_socket "Direct link to --render-socket \[RENDER_SOCKET]") > The web socket address on which Rerun is listening. This should be an IP address and port number separated by a colon, e.g. `--render-socket="127.0.0.1:3030"`. By default, Rerun will listen on socket `host.docker.internal:9876`. If running locally (not via Docker), Rerun's default port is `127.0.0.1:9876` > > When running Rerun from its CLI, the IP would correspond to its `--bind` option and the port would correspond to its `--port` option. #### --detections \[DETECTIONS][​](#--detections-detections "Direct link to --detections \[DETECTIONS]") > This argument acts both as input and output to MetriCal. > > If there are no detections at this path, it is assumed to be an output path, and detections will be written here after being computed. > > If there are detections at this path, they will be read in and used as input to the calibration. In this case, no new detections will be computed, and the existing detections will be used as-is. > > The `--overwrite-detections` flag will force overwriting the detections at this path if they exist. #### -o, --results \[RESULTS\_PATH][​](#-o---results-results_path "Direct link to -o, --results \[RESULTS_PATH]") > Default: path/to/dataset/\[name\_of\_dataset].results.mcap > > The output path to save the final results of the program, in MCAP format. --- # MetriCal Command: Consolidate ## Usage[​](#usage "Direct link to Usage") Consolidate - CLI Example ``` metrical consolidate [OPTIONS] <OBJECT_SPACE_OR_RESULTS_PATH> ``` Consolidate - Manifest Example ``` command = "consolidate" input-object-space = "{{calibrate-stage.results}}" overwrite = false render = false consolidated-object-space = "{{auto}}" ``` ## Purpose[​](#purpose "Direct link to Purpose") The Consolidate command takes a results file or an object space file that contains object relative extrinsics (OREs) and uses those transformations to combine all object spaces into a single, unified object space. Narrow FOV Calibration This command is particularly useful during narrow field-of-view (FOV) calibration scenarios, where it's almost impossible to achieve a sufficient target depth without combining object spaces. Take a look at our [Narrow FOV calibration guide](/metrical/calibration_guides/narrow_fov_cal.md) for more information. This command is particularly useful for calibration scenarios with minimal sensor or target overlap. In such cases, users would typically: 1. Run an initial "survey" calibration to determine object relative extrinsics 2. Use this command to consolidate the results into a single object space 3. Run a full calibration using the consolidated object space 4. 
Use this reference in subsequent calibrations to ensure consistency. By consolidating object spaces, you can effectively combine calibration data from multiple scenes or captures, even when there isn't significant overlap between all sensors. ![Surveying your object spaces](/assets/images/consolidate-d4534ae0ce33dbd762cff6f21832dcc2.png) The ability of MetriCal to consolidate object spaces depends on the presence of object relative extrinsics in the source object space or results file. Object space files are usually created manually with basic target parameters (type, width, height, marker size, etc.) but often without any extrinsics. Results files, on the other hand, are generated by MetriCal during calibration and typically contain extrinsics of targets that are observed simultaneously by one or more sensors in the dataset. It is these extrinsics that MetriCal uses to consolidate object spaces, and thus consolidation is dependent on the data capture process. Targets will be grouped together based on the simultaneous target observations in the data. MetriCal will stitch together target groups that were not directly observed together, if there is a chain of mutual observations connecting them. So if we see targets A and B together in some images, and targets B and C together in other images, MetriCal can infer the relationship between A and C, even if they were never observed together. If there are no mutual observations of a target, it will be left unconsolidated. It's also possible to have multiple disjoint groups of consolidated targets if no mutual observations connect them. ## Examples[​](#examples "Direct link to Examples") #### Consolidate object spaces from a calibration result, visualizing the result[​](#consolidate-object-spaces-from-a-calibration-result-visualizing-the-result "Direct link to Consolidate object spaces from a calibration result, visualizing the result") > ``` > metrical consolidate \ > --consolidated-object-space consolidated.json \ > --render \ > results.json > ``` > > ![consolidated object space](/assets/images/consolidated_object_space-1c135cd4cc9a50a5c0d575585dc8e640.png) #### Use the consolidated object space in a subsequent calibration[​](#use-the-consolidated-object-space-in-a-subsequent-calibration "Direct link to Use the consolidated object space in a subsequent calibration") > After creating a consolidated object space, you can use it in your calibration: > > ``` > metrical calibrate \ > --results final_results.mcap \ > $DATA $PLEX consolidated.json > ``` #### Force overwrite of an existing consolidated object space file[​](#force-overwrite-of-an-existing-consolidated-object-space-file "Direct link to Force overwrite of an existing consolidated object space file") > ``` > metrical consolidate \ > --overwrite \ > --consolidated-object-space consolidated.json \ > results.json > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[OBJECT\_SPACE\_OR\_RESULTS\_PATH][​](#object_space_or_results_path "Direct link to \[OBJECT_SPACE_OR_RESULTS_PATH]") > A path pointing to a description of the object space or a MetriCal results JSON file. For this command to work effectively, the input file should contain object relative extrinsics that relate multiple object spaces together. ## Options[​](#options "Direct link to Options") #### Global Arguments[​](#global-arguments "Direct link to Global Arguments") As with every command, all [global arguments](/metrical/commands/global_arguments.md) are supported (though not all may be used). 
#### -o, --consolidated-object-space \[CONSOLIDATED\_OBJECT\_SPACE\_PATH][​](#-o---consolidated-object-space-consolidated_object_space_path "Direct link to -o, --consolidated-object-space \[CONSOLIDATED_OBJECT_SPACE_PATH]") > **Default: consolidated\_objects.json** > > The output path to save the consolidated object space, in JSON format. If a directory is provided, the file will be created as "consolidated\_objects.json" within that directory. #### -y, --overwrite[​](#-y---overwrite "Direct link to -y, --overwrite") > **Default: false** > > Overwrite an existing consolidated object space file if it exists at the specified location. If this flag is not set and the output file already exists, the operation will fail with an error. #### -r, --render[​](#-r---render "Direct link to -r, --render") > **Default: false** > > Render the consolidated object space in Rerun #### --render-socket[​](#--render-socket "Direct link to --render-socket") > The web socket address on which Rerun is listening. This should be an IP address and port number separated by a colon, e.g. `--render-socket="127.0.0.1:3030"`. By default, Rerun will listen on socket `host.docker.internal:9876`. If running locally (not via Docker), Rerun's default port is `127.0.0.1:9876` When running Rerun from its CLI, the IP would correspond to its `--bind` option and the port would correspond to its `--port` option. --- # MetriCal Command: Init ## Usage[​](#usage "Direct link to Usage") Init - CLI Example ``` metrical init [OPTIONS] <DATASET> <INITIALIZED_PLEX> ``` Init - Manifest Example ``` command = "init" dataset = "{{variables.dataset}}" reference-source = [] topic-to-model = [ ["topic", "model"], ["topic2", "model2"], ... ] remap-reference-component = [] overwrite-strategy = "preserve" uuid-strategy = "inherit-reference" initialized-plex = "{{auto}}" ``` ## Purpose[​](#purpose "Direct link to Purpose") Init will infer all components, spatial constraints, and temporal constraints based on the observations and interactions in the dataset. It will then write this information out in plex form to a JSON file (listed as `$INIT_PLEX` in code samples below). This is our Initialized Plex! MetriCal will use this plex as the initialization to the [Calibrate](/metrical/commands/calibration/calibrate.md) command. Let's break down this example Init command: ``` metrical init -m *cam_ir*:opencv-radtan -m *cam_color*:eucm -m *lidar*:lidar $DATA $INIT_PLEX ``` * Assign all topics/folders that match the `*cam_ir*` pattern to the OpenCV Radtan model. * Assign all topics/folders that match the `*cam_color*` pattern to the EUCM model. * Assign all topics/folders that match the `*lidar*` pattern to the No Offset model (just extrinsics) * Use the data in `$DATA` to initialize the plex. MetriCal will discard information from any topics that aren't assigned a model using the command line. See all of the models available to MetriCal in the [`--topic-to-model`/`-m` documentation](#available-models). ### Plex Reference Sources[​](#plex-reference-sources "Direct link to Plex Reference Sources") If you have a good idea of some aspect of your system, you can actually provide Init mode with a Plex Reference. A Plex Reference can come in the form of another plex (possibly from a previous calibration) or a URDF XML. It's common to provide a "golden plex" for initialization of new systems this way. 
``` REF_PLEX=golden_plex.json metrical init --reference-source $REF_PLEX $DATA $INIT_PLEX ``` All components, spatial constraints, and temporal constraints that are in the reference plexes will be applied to the initialized plex. Spatial constraints are applied to the initialized plex only if the names of the topics in the plex reference match one of the components mapped by the [`--topic-to-model`](#-m---topic-to-model) arguments. All this being said, the data always comes first. If any figures in a plex reference conflict with the input data, the initialized plex will be modified to a "best guess" value by MetriCal. An example of this would be changing the image size of a camera based on the observations in the dataset, but keeping any existing spatial constraints intact. #### Plex Reference Application[​](#plex-reference-application "Direct link to Plex Reference Application") You can apply as many plex references as you would like to Init before running the command: ``` metrical init --reference-source $PLEX_ONE \ --reference-source $URDF_ONE \ --reference-source $PLEX_TWO \ $DATA $INIT_PLEX ``` In this case, MetriCal will apply each Plex Reference in the order that they were provided. This also means that components and constraints that show up in multiple plex references may be overwritten as each reference is applied. ### Overwrite Strategy[​](#overwrite-strategy "Direct link to Overwrite Strategy") By default, MetriCal will prevent you from overwriting a plex if one exists at the initialized plex path. However, this behavior can be changed depending on your operating environment by defining the `--overwrite-strategy` argument. * `warn`: Default value. Preserve existing output file if it exists. Exit with an error if an overwrite is attempted. * `preserve`: Preserve existing output file if it exists. Don't throw an error, just exit quietly. * `replace`: Replace existing output file completely. Redefining this overwrite strategy can be helpful in manifest or scripting scenarios. ### UUID Strategy[​](#uuid-strategy "Direct link to UUID Strategy") Plexes assign UUIDs to every component in order to differentiate *that* component from *another* component of the same name/topic from another run, as they can represent different hardware sets or systems in real life. However, a few problems arise when using plex references: * If every plex reference has a different UUID to describe the same topics, what UUIDs should be used in the ultimate initialized plex? * What if my plex reference *does* describe the same hardware as the system I'm calibrating? MetriCal provides a configurable UUID strategy to answer these questions. * `inherit-existing`: Default value. Inherit existing UUIDs from the existing initialized plex, if available. This will preserve the ability to re-run detections. If no existing initialized plex is available, this behaves like `inherit-reference`. * `inherit-reference`: Generate new UUIDs from reference sources, if available. Don't preserve existing UUIDs from an existing initialized plex. If there are no reference sources, this behaves like `generate`. * `generate`: Generate all new UUIDs regardless of the existing initialized plex or sources. ## Examples[​](#examples "Direct link to Examples") #### Create a plex from input data[​](#create-a-plex-from-input-data "Direct link to Create a plex from input data") Use OpenCV's Brown-Conrady model for all topics whose name starts with the substring `/camera/`. 
``` metrical init --topic-to-model /camera/*:opencv-radtan $DATA $INIT_PLEX ``` #### Reference a "golden" plex from a previous calibration[​](#reference-a-golden-plex-from-a-previous-calibration "Direct link to Reference a \"golden\" plex from a previous calibration") In this scenario, we're telling MetriCal to use a plex from a previous calibration as a starting point, but to assign brand new UUIDs to the initialized plex at the end. This can be handy in a production line scenario, where calibration from similar systems must be audited across their product lifecycle. ``` metrical init \ --reference-source $GOLDEN_PLEX \ --uuid-strategy generate \ $DATA $INIT_PLEX ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[DATASET][​](#dataset "Direct link to \[DATASET]") > The dataset with which to calibrate, or the detections from an earlier run. Users can pass MCAP files (usually with an `.mcap` extension) or a top-level directory containing a set of nested directories for each topic #### \[INITIALIZED\_PLEX][​](#initialized_plex "Direct link to \[INITIALIZED_PLEX]") > The initialized plex output from this step. Default: initialized-plex.json. ## Options[​](#options "Direct link to Options") #### Global Arguments[​](#global-arguments "Direct link to Global Arguments") As with every command, all [global arguments](/metrical/commands/global_arguments.md) are supported (though not all may be used). #### -p, --reference-source \[REFERENCE\_SOURCE][​](#-p---reference-source-reference_source "Direct link to -p, --reference-source \[REFERENCE_SOURCE]") > The path to a reference source, which informs Init mode of any known models, spatial constraints, or temporal constraints. This can be a MetriCal results file, a plex, or a URDF #### -m, --topic-to-model[​](#-m---topic-to-model "Direct link to -m, --topic-to-model") > A mapping of topic/folder names to models in the input plex. NOTE: All topics intended for calibration *must* be enumerated by this argument. If an initial plex is provided, any matching topic models in the plex will be overwritten. 
> > Example: We would like the topic "camera\_1" to be modeled using OpenCV's regular distortion model (aka Brown Conrady), and "camera\_2" to be modeled using its fisheye model: > > ``` > -m camera_1:opencv-radtan -m camera_2:opencv-fisheye > ``` > > One may also use wildcards to designate many topics at a time: > > ``` > -m /camera/\*:opencv-radtan > ``` > > #### Available models[​](#available-models "Direct link to Available models") > > ##### Cameras[​](#cameras "Direct link to Cameras") > > * [no-distortion](/metrical/calibration_models/cameras.md#no-distortion) > * [pinhole-with-brown-conrady](/metrical/calibration_models/cameras.md#pinhole-with-inverse-brown-conrady) > * [pinhole-with-kannala-brandt](/metrical/calibration_models/cameras.md#pinhole-with-inverse-kannala-brandt) > * [opencv-radtan](/metrical/calibration_models/cameras.md#opencv-radtan) > * [opencv-fisheye](/metrical/calibration_models/cameras.md#opencv-fisheye) > * [opencv-rational](/metrical/calibration_models/cameras.md#opencv-rational) > * [eucm](/metrical/calibration_models/cameras.md#eucm) > * [double-sphere](/metrical/calibration_models/cameras.md#double-sphere) > * [omni](/metrical/calibration_models/cameras.md#omnidirectional-omni) > * [power-law](/metrical/calibration_models/cameras.md#power-law) > > ##### Lidar[​](#lidar "Direct link to Lidar") > > * [lidar](/metrical/calibration_models/lidar.md) > > ##### IMU[​](#imu "Direct link to IMU") > > * [scale](/metrical/calibration_models/imu.md#imu-model-descriptions) > * [scale-shear](/metrical/calibration_models/imu.md#imu-model-descriptions) > * [scale-shear-rotation](/metrical/calibration_models/imu.md#imu-model-descriptions) > * [scale-shear-rotation-g-sensitivity](/metrical/calibration_models/imu.md#imu-model-descriptions) > > ##### Local Navigation Systems[​](#local-navigation-systems "Direct link to Local Navigation Systems") > > * [lns](/metrical/calibration_models/local_navigation.md) #### -T, --remap-reference-component \[REMAP\_REFERENCE\_COMPONENT][​](#-t---remap-reference-component-remap_reference_component "Direct link to -T, --remap-reference-component \[REMAP_REFERENCE_COMPONENT]") > Remaps a component from the reference plex to a new topic name in the given dataset. > > Can take a string of either format `old_component_name:new_topic_name` or `old_component_uuid:new_topic_name`. #### --overwrite-strategy \[OVERWRITE\_STRATEGY][​](#--overwrite-strategy-overwrite_strategy "Direct link to --overwrite-strategy \[OVERWRITE_STRATEGY]") > The strategy for overwriting an existing output plex. > > Default: `warn` > > Possible values: > > * `warn`: Preserve existing output file if it exists. Exit with an error if an overwrite is attempted > * `preserve`: Preserve existing output file if it exists. Don't throw an error, just exit quietly > * `replace`: Replace existing output file completely #### --uuid-strategy \[UUID\_STRATEGY][​](#--uuid-strategy-uuid_strategy "Direct link to --uuid-strategy \[UUID_STRATEGY]") > The strategy for handling UUIDs in the output plex. > > Default: `inherit-existing` > > Possible values: > > * `inherit-existing`: Inherit existing UUIDs from the existing output plex, if available. This will preserve the ability to re-run detections. If no existing output plex is available, this behaves like `inherit-reference`. > * `inherit-reference`: Generate new UUIDs from reference sources, if available. Don't preserve existing UUIDs from an existing output plex. If there are no reference sources, this behaves like `generate`. 
> * `generate`: Generate all new UUIDs regardless of existing output or sources --- # Focus ## Usage[​](#usage "Direct link to Usage") Focus - CLI Example ``` metrical shape focus [OPTIONS] --component <COMPONENT> <INPUT_PLEX> <SHAPED_PLEX> ``` Focus - Manifest Example ``` command = "shape-focus" input-plex = "{{calibrate-stage.results}}" component = "/rs/imu" shaped-plex = "{{auto}}" ``` ## Purpose[​](#purpose "Direct link to Purpose") This Shape command creates a plex in which all components are spatially connected to only one origin component/object space ID. In other words, one component acts as a "focus", with spatial constraints to all other components. Notice that this command will compose two or more spatial constraints to achieve the optimal "path" between two components in space. ![An illustration of a plex created through Focus command](/assets/images/shape_focus-b2683baa26f761dbdd56f38f4215130c.png) ## Examples[​](#examples "Direct link to Examples") #### Create a plex focused around component `ir_one`[​](#create-a-plex-focused-around-component-ir_one "Direct link to create-a-plex-focused-around-component-ir_one") > ``` > metrical shape focus --component ir_one $RESULTS $SHAPED_PLEX > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[INPUT\_PLEX][​](#input_plex "Direct link to \[INPUT_PLEX]") > The path to the input plex. This can be a MetriCal Results MCAP or a plex JSON. #### \[SHAPED\_PLEX][​](#shaped_plex "Direct link to \[SHAPED_PLEX]") > The output file path for this shaped plex. ## Options[​](#options "Direct link to Options") #### Global Arguments[​](#global-arguments "Direct link to Global Arguments") As with every command, all [global arguments](/metrical/commands/global_arguments.md) are supported (though not all may be used). #### -a, --component \[COMPONENT][​](#-a---component-component "Direct link to -a, --component \[COMPONENT]") > The component around which this plex is focused. --- Code Samples For code samples of LUT-based image manipulation, please check out our [lut-examples](https://gitlab.com/tangram-vision/oss/lut-examples) repository. # Lookup Table ## Usage[​](#usage "Direct link to Usage") Lookup Table - CLI Example ``` metrical shape lut [OPTIONS] --camera <CAMERA> <INPUT_PLEX> <ARTIFACT> ``` Lookup Table - Manifest Example ``` command = "shape-lut" input-plex = "{{calibrate-stage.results}}" format = "json|msgpack" camera = "camera_to_generate_lut_for" artifact = "{{auto}}" ``` ## Purpose[​](#purpose "Direct link to Purpose") It's not always easy to adopt a new camera model. Sometimes, you just want to apply a correction and not have to worry about getting all of the math right. The LUT subcommand gives you a shortcut: it describes the locations of the mapped pixel values required to apply a calibration to an entire image. Note that a lookup table only describes the correction for an image of the same dimensions as the calibrated image. If you're trying to downsample or upsample an image, you'll need to derive a new lookup table for that image dimension. These lookup tables can be used as-is using OpenCV's lookup table routines; see [this open-source repo on applying lookup tables in OpenCV](https://gitlab.com/tangram-vision/oss/lut-examples) for an example. 
## Examples[​](#examples "Direct link to Examples") #### Create a correction lookup table for camera `ir_one`[​](#create-a-correction-lookup-table-for-camera-ir_one "Direct link to create-a-correction-lookup-table-for-camera-ir_one") > ``` > metrical shape lut --camera ir_one > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[INPUT\_PLEX][​](#input_plex "Direct link to \[INPUT_PLEX]") > The path to the input plex. This can be a MetriCal Results MCAP or a plex JSON. #### \[ARTIFACT][​](#artifact "Direct link to \[ARTIFACT]") > The artifacts output location. ## Options[​](#options "Direct link to Options") #### Global Arguments[​](#global-arguments "Direct link to Global Arguments") As with every command, all [global arguments](/metrical/commands/global_arguments.md) are supported (though not all may be used). #### -a, --camera \[CAMERA][​](#-a---camera-camera "Direct link to -a, --camera \[CAMERA]") > The camera to generate a LUT for. Must be a camera component reference such as a UUID or component name. #### -f, --format \[FORMAT][​](#-f---format-format "Direct link to -f, --format \[FORMAT]") > **Default: `json`** > > What serializer to use to format the output. Possible values: > > * json: Output the shape data as JSON > * msgpack: Output the shape data as MsgPack --- Code Samples For code samples of LUT-based image manipulation, please check out our [lut-examples](https://gitlab.com/tangram-vision/oss/lut-examples) repository. # Stereo Lookup Table ## Usage[​](#usage "Direct link to Usage") Stereo Lookup Table - CLI Example ``` metrical shape stereo-lut [OPTIONS] \ --dominant <DOMINANT> \ --secondary <SECONDARY> ``` Stereo Lookup Table - Manifest Example ``` command = "shape-stereo-lut" input-plex = "{{calibrate-stage.results}}" format = "json|msgpack" dominant = "camera_to_use_as_dominant_eye" secondary = "camera_to_use_as_secondary_eye" artifact = "{{auto}}" ``` ## Purpose[​](#purpose "Direct link to Purpose") This command creates two pixel-wise lookup tables to produce a stereo-rectified pair based on an existing calibration. Rectification occurs in a hallucinated frame with a translation at the origin of the dominant eye, but with a rotation halfway between the dominant frame and the secondary frame. Rectification is the process of transforming a stereo pair of images into a common plane. This is useful for a ton of different applications, including feature matching and disparity estimation. Rectification is often a prerequisite for other computer vision tasks. The Stereo LUT subcommand will create two lookup tables, one for each camera in the stereo pair. These LUTs are the same format as the ones created by the `lut` command, but they map the pixel-wise shift needed to move each image into that common plane. The result is a pair of rectified images! The Stereo LUT subcommand will also output the values necessary to calculate depth from disparity values. ## Examples[​](#examples "Direct link to Examples") #### Create a rectification lookup table between a stereo pair[​](#create-a-rectification-lookup-table-between-a-stereo-pair "Direct link to Create a rectification lookup table between a stereo pair") > ``` > metrical shape stereo-lut --dominant ir_one --secondary ir_two > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[INPUT\_PLEX][​](#input_plex "Direct link to \[INPUT_PLEX]") > The path to the input plex. This can be a MetriCal Results MCAP or a plex JSON. #### \[ARTIFACT][​](#artifact "Direct link to \[ARTIFACT]") > The artifacts output location.
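Regarding the depth-from-disparity values mentioned in the Purpose section above: once a pair has been rectified, depth follows the standard pinhole stereo relation Z = f * B / d. The sketch below uses placeholder values for the rectified focal length and baseline; in practice, pull these from the stereo LUT artifact (see the output schema below) rather than hard-coding them.

```
import numpy as np

# Placeholder values; in practice, read the rectified focal length (pixels) and
# the stereo baseline (meters) from the stereo LUT artifact's output values.
focal_length_px = 615.0
baseline_m = 0.055

# Disparity map (in pixels) computed over the rectified pair by your stereo matcher.
disparity_px = np.array([[12.5, 25.0], [50.0, 0.0]], dtype=np.float32)

# Standard pinhole stereo relation: Z = f * B / d. Zero disparity maps to infinite depth.
with np.errstate(divide="ignore"):
    depth_m = np.where(disparity_px > 0, focal_length_px * baseline_m / disparity_px, np.inf)

print(depth_m)
```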
## Options[​](#options "Direct link to Options") #### Global Arguments[​](#global-arguments "Direct link to Global Arguments") As with every command, all [global arguments](/metrical/commands/global_arguments.md) are supported (though not all may be used). #### -a, --dominant \[DOMINANT][​](#-a---dominant-dominant "Direct link to -a, --dominant \[DOMINANT]") > The dominant eye in this stereo pair. Must be a camera component specifier (UUID or component name). #### -b, --secondary \[SECONDARY][​](#-b---secondary-secondary "Direct link to -b, --secondary \[SECONDARY]") > The secondary eye in this stereo pair. Must be a camera component specifier (UUID or component name). #### -f, --format \[FORMAT][​](#-f---format-format "Direct link to -f, --format \[FORMAT]") > **Default: `json`** > > What serializer to use to format the output. Possible values: > > * json: Output the shape data as JSON > * msgpack: Output the shape data as MsgPack ## Output Schema[​](#output-schema "Direct link to Output Schema") [Raw schema file](/assets/files/stereo_rectifier_schema-7d3105fdca9f2862cb70c99dffba0555.json) --- # Minimum Spanning Tree ## Usage[​](#usage "Direct link to Usage") Minimum Spanning Tree - CLI Example ``` metrical shape mst [OPTIONS] ``` Minimum Spanning Tree - Manifest Example ``` command = "shape-mst" input-plex = "{{calibrate-stage.results}}" shaped-plex = "{{auto}}" ``` ## Purpose[​](#purpose "Direct link to Purpose") As defined by [Wikipedia](https://en.wikipedia.org/wiki/Minimum_spanning_tree), a minimum spanning tree (MST) "is a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices together, without any cycles and with the minimum possible total edge weight". In other words, it's the set of connections with the smallest weight that still connects all of the components together. In the case of a plex's spatial constraints, the "weights" for our MST search are the determinants of the covariances of the extrinsics. This is a good measure of how well-constrained a component is to its neighbors. ![An illustration of a plex created through MST command](/assets/images/shape_mst-a5c8caa4574c3d3d8a4bbd62392ea839.png) ## Examples[​](#examples "Direct link to Examples") #### Create a minimum spanning tree plex[​](#create-a-minimum-spanning-tree-plex "Direct link to Create a minimum spanning tree plex") > ``` > metrical shape mst $RESULTS $SHAPED_PLEX > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[INPUT\_PLEX][​](#input_plex "Direct link to \[INPUT_PLEX]") > The path to the input plex. This can be a MetriCal Results MCAP or a plex JSON. #### \[SHAPED\_PLEX][​](#shaped_plex "Direct link to \[SHAPED_PLEX]") > The output file path for this shaped plex. ## Options[​](#options "Direct link to Options") #### Global Arguments[​](#global-arguments "Direct link to Global Arguments") As with every command, all [global arguments](/metrical/commands/global_arguments.md) are supported (though not all may be used). --- # Shape Mode ## Usage[​](#usage "Direct link to Usage") Shape Overview - CLI Example ``` metrical shape <COMMAND> [COMMAND_OPTIONS] ``` ## Purpose[​](#purpose "Direct link to Purpose") Plexes can become incredibly complicated and difficult to parse depending on the complexity of the system you're calibrating. This is where the Shape command comes in handy. Shape modifies a plex into a variety of different and useful configurations. It was created with an eye towards the practical use of calibration data in a deployed system.
Some Shape subcommands, like `mst` and `focus`, rely on the [covariance of each spatial constraint](/metrical/core_concepts/constraints.md#spatial-covariance) in the plex to inform the operation. Since the covariance is a measure of uncertainty, we can use it to carve out the "most certain" constraints between two components. ![Covariances of the plex's spatial constraints](/assets/images/shape_covariance-ee2d1c8fad41e41cd8fdf8d282c7a0f7.png) Other Shape commands will mutate the plex into something useful for another application, such as the `urdf` command for ROS applications. | Commands | Purpose | | ------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------- | | [`focus`](/metrical/commands/calibration/shape/shape_focus.md) | Create a plex in which all components are spatially connected to only one "focus" component, i.e. a "hub-and-spoke" plex. | | [`lut`](/metrical/commands/calibration/shape/shape_lut.md) | Create a pixel-wise lookup table for a single camera. | | [`stereo-lut`](/metrical/commands/calibration/shape/shape_lut_stereo.md) | Create two pixel-wise lookup tables that produce a rectified stereo pair. | | [`mst`](/metrical/commands/calibration/shape/shape_mst.md) | Create a plex from the Minimum Spanning Tree of all spatial constraints. | | [`tabular`](/metrical/commands/calibration/shape/shape_tabular.md) | Re-encode the calibration data in a plex as a series of compressed tables. | | [`urdf`](/metrical/commands/calibration/shape/shape_urdf.md) | Create a ROS-compatible URDF from this plex. | --- # Tabular ## Usage[​](#usage "Direct link to Usage") Tabular - CLI Example ``` metrical shape tabular [OPTIONS] ``` Tabular - Manifest Example ``` command = "shape-tabular" input-plex = "{{calibrate-stage.results}}" format = "json|msgpack" filter-component = ["component_name_or_uuid_one", "component_name_or_uuid_two", ...] artifact = "{{auto}}" ``` ## Purpose[​](#purpose "Direct link to Purpose") The tabular shape subcommand writes a subset of the data in a Plex in a tabular format, which contains: 1. The intrinsic information of each component in the source Plex (without covariance). 2. The extrinsic information of each connected component pairing in the source Plex (without covariance). At present, the intrinsic information for camera components also includes serialized lookup tables (LUTs) similar to those output by the [LUT](/metrical/commands/calibration/shape/shape_lut.md) mode, but for every camera in the source Plex rather than just a single camera. The purpose of the Tabular subcommand is to provide a simplified description of the calibration and related artifacts (such as the LUTs) that is more suitable for flashing onto devices that may not need the full system description.
Generally speaking, the output JSON (or MessagePack binary) will have fields akin to the following: ``` { "first_component_name": { "raw_intrinsics": { // intrinsics model as encoded in the plex }, "lut": { "width": 800, "height": 700, "remapping_columns": [ /* an 800×700 LUT in row-major order */ ], "remapping_rows": [ /* an 800×700 LUT in row-major order */ ] } }, "second_component_name": { "raw_intrinsics": { // intrinsics model as encoded in the plex }, "lut": { "width": 800, "height": 700, "remapping_columns": [ /* an 800×700 LUT in row-major order */ ], "remapping_rows": [ /* an 800×700 LUT in row-major order */ ] } }, "extrinsics_table": [ { "extrinsics": { "rotation": [0, 0, 0, 1], "translation": [3.0, 4.0, 5.0] }, "from": "first_component_name", "to": "second_component_name" }, { "extrinsics": { "rotation": [0, 0, 0, 1], "translation": [-3.0, -4.0, -5.0] }, "from": "second_component_name", "to": "first_component_name" } ] } ``` Aside from the special "extrinsics\_table" member which specifies the list of extrinsics pairs in the plex, each component is treated as a named member of the broader "tabular" format object. ## Examples[​](#examples "Direct link to Examples") #### Create a tabular calibration in the current directory[​](#create-a-tabular-calibration-in-the-current-directory "Direct link to Create a tabular calibration in the current directory") > ``` > metrical shape tabular input_data.results.mcap . > # To view it > jq -C . input_data.results.tabular.json | less > ``` #### Create a tabular calibration in MsgPack format in the `/calibrations` directory[​](#create-a-tabular-calibration-in-msgpack-format-in-the-calibrations-directory "Direct link to create-a-tabular-calibration-in-msgpack-format-in-the-calibrations-directory") > ``` > metrical shape tabular -f msgpack input_data.results.mcap /calibrations > ``` #### Create a tabular calibration that only includes `/camera*` topics[​](#create-a-tabular-calibration-that-only-includes-camera-topics "Direct link to create-a-tabular-calibration-that-only-includes-camera-topics") > ``` > metrical shape tabular --filter-component "/camera*" input_data.results.mcap . > # To view it > jq -C . input_data.results.tabular.json | less > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[INPUT\_PLEX][​](#input_plex "Direct link to \[INPUT_PLEX]") > The path to the input plex. This can be a MetriCal Results MCAP or a plex JSON. #### \[ARTIFACT][​](#artifact "Direct link to \[ARTIFACT]") > The artifacts output location. ## Options[​](#options "Direct link to Options") #### Global Arguments[​](#global-arguments "Direct link to Global Arguments") As with every command, all [global arguments](/metrical/commands/global_arguments.md) are supported (though not all may be used). #### -f, --format \[FORMAT][​](#-f---format-format "Direct link to -f, --format \[FORMAT]") > **Default: `json`** > > What serializer to use to format the output. Possible values: > > * json: Output the tabular format as JSON > * msgpack: Output the tabular format as MsgPack #### --filter-component \[FILTER\_COMPONENT][​](#--filter-component-filter_component "Direct link to --filter-component \[FILTER_COMPONENT]") > A set of filters for which components to include. Every filter is a component specifier (name or UUID) applied as if it is given an "OR" relationship with other filters. 
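As a quick illustration of consuming this format, here is a minimal Python sketch that walks the tabular JSON using the field layout shown above. The file path is a placeholder, and the quaternion and translation conventions should be confirmed against your own plex rather than taken from this snippet.

```
import json

# Placeholder path; `metrical shape tabular` names its artifact after the input data.
with open("input_data.results.tabular.json") as f:
    tabular = json.load(f)

# Every key except "extrinsics_table" is a component entry carrying intrinsics and a LUT.
for name, entry in tabular.items():
    if name == "extrinsics_table":
        continue
    lut = entry["lut"]
    print(f"{name}: LUT is {lut['width']}x{lut['height']}")

# The extrinsics table lists each connected component pairing in both directions.
for pair in tabular["extrinsics_table"]:
    rotation = pair["extrinsics"]["rotation"]        # quaternion, as encoded in the plex
    translation = pair["extrinsics"]["translation"]  # translation vector, as encoded in the plex
    print(f"{pair['from']} -> {pair['to']}: rotation={rotation}, translation={translation}")
```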
--- # URDF ## Usage[​](#usage "Direct link to Usage") URDF - CLI Example ``` metrical shape urdf [OPTIONS] --root-component <ROOT_COMPONENT> ``` URDF - Manifest Example ``` command = "shape-urdf" input-plex = "{{calibrate-stage.results}}" use-optical-frame = true|false root-component = "component_name_or_uuid" shaped-plex = "{{auto}}" ``` ## Purpose[​](#purpose "Direct link to Purpose") Create a URDF from a plex. This URDF is compatible with common ROS operations. ## Examples[​](#examples "Direct link to Examples") #### Create a URDF from a plex[​](#create-a-urdf-from-a-plex "Direct link to Create a URDF from a plex") > ``` > metrical shape urdf --root-component ir_one $INPUT_PLEX $SHAPED_PLEX > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[INPUT\_PLEX][​](#input_plex "Direct link to \[INPUT_PLEX]") > The path to the input plex. This can be a MetriCal results MCAP or a plex JSON. #### \[SHAPED\_PLEX][​](#shaped_plex "Direct link to \[SHAPED_PLEX]") > The output file path for this shaped plex. ## Options[​](#options "Direct link to Options") #### Global Arguments[​](#global-arguments "Direct link to Global Arguments") As with every command, all [global arguments](/metrical/commands/global_arguments.md) are supported (though not all may be used). #### --use-optical-frame[​](#--use-optical-frame "Direct link to --use-optical-frame") > The `use_optical_frame` boolean determines whether or not to create a link for the camera frame in the URDF representation of the Plex. If `use_optical_frame` is `true` and the component is a camera, a new link for the camera frame is created and added to the URDF. This link represents the optical frame of the camera in the robot model: > > base -> camera (FLU) -> camera\_optical (RDF) > > If `use_optical_frame` is `false`, no such link is created. This might be the case if the optical frame is not needed in the URDF representation, or if it's being handled in some other way: > > base -> camera (FLU) #### -a, --root-component \[ROOT\_COMPONENT][​](#-a---root-component-root_component "Direct link to -a, --root-component \[ROOT_COMPONENT]") > The root component name or UUID to use for the joint transformations in the URDF. --- # MetriCal Command: Completion ## Usage[​](#usage "Direct link to Usage") Completion - CLI Example ``` metrical completion [OPTIONS] --shell <SHELL> ``` ## Purpose[​](#purpose "Direct link to Purpose") This command is purely for supplementing your MetriCal experience with automatic completions from the terminal. Since it's common to invoke MetriCal using a bash function or alias, use the `--invocation` argument to specify the string with which you'll invoke MetriCal to get the right completions. ### Sourcing the Completion File[​](#sourcing-the-completion-file "Direct link to Sourcing the Completion File") Don't forget to `source` the appropriate completion file before attempting to use completion! For a temporary solution, just run `source` on the generated file: ``` source <COMPLETION_FILE> ``` Otherwise, save the file in a canonical location to automatically load it during any shell session. Some locations are listed below; note that this list is not exhaustive, and may change depending on your configuration.
| Shell | Location (example for `metrical` alias) | | ---------- | ------------------------------------------------------ | | bash | `/usr/share/bash-completion/completions/metrical.bash` | | zsh | `~/.zsh/_metrical` | | fish | `~/.config/fish/completions/metrical.fish` | | elvish | `~/.elvish/metrical.elv` | | powershell | execute `_metrical.ps1` script in PowerShell | ## Examples[​](#examples "Direct link to Examples") #### Generate Completions for Bash[​](#generate-completions-for-bash "Direct link to Generate Completions for Bash") > ``` > metrical completion -i metrical -s bash > $OUTPUT > ``` > > ...where `$OUTPUT` is the file to pipe the output into. ## Options[​](#options "Direct link to Options") #### -i, --invocation \[INVOCATION][​](#-i---invocation-invocation "Direct link to -i, --invocation \[INVOCATION]") > The binary name being invoked, e.g. `metrical`. The default value is the 0th argument of the current invocation. This is useful if you're using an alias or function to invoke MetriCal. #### -s, --shell \[SHELL][​](#-s---shell-shell "Direct link to -s, --shell \[SHELL]") > The shell variant to generate. Valid values: > > * `bash`: [bash](https://www.gnu.org/software/bash/manual/bash.html) completion file. > * `zsh`: [zsh](https://www.zsh.org/) completion file. > * `fish`: [Fish](https://fishshell.com/docs/current/index.html) completion file. > * `elvish`: [Elvish](https://elv.sh/ref/) completion file. > * `powershell`: [Powershell](https://learn.microsoft.com/en-us/powershell/) completion file. --- # MetriCal Command: Display Rerun Required This is the only command that expects an active Rerun server. Make sure to have Rerun running before using this command. Read about configuring Rerun [here](/metrical/configuration/visualization.md). ## Usage[​](#usage "Direct link to Usage") Display - CLI Example ``` metrical display [OPTIONS] $INPUT_DATA_PATH -p $PLEX_OR_RESULTS_PATH ``` ## Purpose[​](#purpose "Direct link to Purpose") MetriCal's Display command renders any data that it's given in Rerun. If a calibrated plex is passed in with `-p` (either from a calibration results MCAP or a plex JSON), MetriCal will apply the calibration to the data before rendering. This effectively acts as a gut-check validation of the calibration quality: if the calibration is good, then the rendered data should look well-aligned and consistent. That being said, there's no reason you can't apply a calibration to a test dataset, i.e. one that wasn't used to derive a calibration. Just make sure that the dataset shares the same topics as the plex. If this is not the case, you can use the [`--topic-to-component`](#-m---topic-to-component-topic-to-component) flag to map topics to components. Watch the video below for a quick primer on Display command: ## Examples[​](#examples "Direct link to Examples") #### Start a Display session immediately after a calibration run[​](#start-a-display-session-immediately-after-a-calibration-run "Direct link to Start a Display session immediately after a calibration run") > ``` > metrical calibrate -o $RESULTS $DATA $PLEX $OBJ_SPC > metrical display $DATA -p $RESULTS > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[INPUT\_DATA\_PATH][​](#input_data_path "Direct link to \[INPUT_DATA_PATH]") > The dataset with which to calibrate, or the detections from an earlier run. Users can pass MCAP files (usually with an `.mcap` extension) or a top-level directory containing a set of nested directories for each topic. 
## Options[​](#options "Direct link to Options") #### Global Arguments[​](#global-arguments "Direct link to Global Arguments") As with every command, all [global arguments](/metrical/commands/global_arguments.md) are supported (though not all may be used). #### -p \[PLEX\_OR\_RESULTS\_PATH][​](#-p-plex_or_results_path "Direct link to -p \[PLEX_OR_RESULTS_PATH]") > The path to the input plex. This can be a MetriCal results MCAP or a plex JSON. #### -m, --topic-to-component \[TOPIC-TO-COMPONENT][​](#-m---topic-to-component-topic-to-component "Direct link to -m, --topic-to-component \[TOPIC-TO-COMPONENT]") > A mapping of ROS topic/folder names to component names/UUIDs in the input plex. > MetriCal only parses data that has a topic-component mapping. Ideally, topics and components share the same name. However, if this is not the case, use this flag to map topic names from the dataset to component names in the plex. #### --render-socket \[RENDER\_SOCKET][​](#--render-socket-render_socket "Direct link to --render-socket \[RENDER_SOCKET]") > The web socket address on which Rerun is listening. This should be an IP address and port number separated by a colon, e.g. `--render-socket="127.0.0.1:3030"`. By default, Rerun will listen on socket `host.docker.internal:9876`. If running locally (not via Docker), Rerun's default port is `127.0.0.1:9876` > > When running Rerun from its CLI, the IP would correspond to its `--bind` option and the port would correspond to its `--port` option. --- # MetriCal Command: Report ## Usage[​](#usage "Direct link to Usage") Report - CLI Example ``` metrical report [OPTIONS] ``` ## Purpose[​](#purpose "Direct link to Purpose") The Report command is primarily used to re-generate the [full report](/metrical/results/report.md) from a [calibration output](/metrical/results/output_file.md). It will also print any provided plex file in a human-readable format. If an `origin` and `secondary` component are provided, then MetriCal will extract the relevant component and constraint data between those two items. `Origin` will act as the origin for any constraints. This is useful for a gut-check of a calibration parameter or two. If you need to reformat a plex into something more convenient for your application, use [Shape command](/metrical/commands/calibration/shape/shape_overview.md) instead. ## Examples[​](#examples "Direct link to Examples") #### Print a plex in a human-readable format[​](#print-a-plex-in-a-human-readable-format "Direct link to Print a plex in a human-readable format") > ``` > metrical report $PLEX > ``` #### Generate a report using the output file from a calibration[​](#generate-a-report-using-the-output-file-from-a-calibration "Direct link to Generate a report using the output file from a calibration") > ``` > metrical calibrate -o $RESULTS $DATA $INIT_PLEX $OBJ > metrical report $RESULTS > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[PLEX\_OR\_RESULTS\_PATH][​](#plex_or_results_path "Direct link to \[PLEX_OR_RESULTS_PATH]") > The path to the input plex. This can be a MetriCal results JSON or a plex JSON. ## Options[​](#options "Direct link to Options") #### Universal Options[​](#universal-options "Direct link to Universal Options") As with every command, all [universal options](/metrical/commands/global_arguments.md) are supported (though not all may be used). --- # Errors & Troubleshooting MetriCal logs various error codes during operation or upon exiting. 
The documentation here catalogs these errors, along with descriptions (🔍) and troubleshooting (🔧) steps. While there are a large number of errors that can occur for various reasons, we do our best to keep this aligned with the current collection of error codes and causes that we find in our products. If we've missed something here, or if you need additional clarification, please contact us at . ## Error Codes in Logs[​](#error-codes-in-logs "Direct link to Error Codes in Logs") During operation, error codes may be printed to the log (stderr) as informational warnings or as errors that caused MetriCal to fail and exit. These are usually coded as `cal-calibrate-001` or `license-002` or something similar. In addition to these codes, there are usually anchors or sections in the docs that relate to each individual code. ## MetriCal Exit Codes[​](#metrical-exit-codes "Direct link to MetriCal Exit Codes") When exiting, MetriCal will print the exit code and return it from the process. The table below lists the exit codes and a description of what each code means. | Exit Code | Error Type | Notes | | --------- | ---------------------------- | -------------------------------------------------------------------------------------------------------------- | | 0 | Success, No Errors | Good Job! | | 1 | IO | Filesystem and Input/Output related errors | | 2 | Cli | Command Line Interface related errors | | 3 | ShapeMode | [Shape](/metrical/commands/calibration/shape/shape_overview.md) related errors | | 4 | InitMode | [Init](/metrical/commands/calibration/init.md) related errors | | 5 | ~~Pretty Print~~ | Historical exit code, unused | | 6 | MetriCalLicense | License related errors | | 7 | ~~Display mode~~ | Historical exit code, unused | | 8 | CalibrateMode | [Calibrate](/metrical/commands/calibration/calibrate.md) related errors | | 9 | ~~Completion mode~~ | Historical exit code, unused | | 10 | Rendering | Rendering related errors | | 11 | ~~Consolidate object space~~ | Historical exit code, unused | | 12 | Diagnostic | An error-level [diagnostic](/metrical/results/output_file.md) was thrown | | 13 | Format | Errors whose source can be traced back to some issue based on trying to read or write the expected file format | | 14 | Manifest | Errors related to processing [MetriCal Manifest files](/metrical/commands/manifest/manifest_overview.md) | | 255 | Other | Other unspecified errors (u8::MAX = 255) | ## Error Sub-Codes[​](#error-sub-codes "Direct link to Error Sub-Codes") Many codes have sub-codes that describe the specific error that occurred. Find a list of relevant sub-codes below. ### Code 1: IO Errors[​](#code-1-io-errors "Direct link to Code 1: IO Errors") #### Filesystem Error (cal-io-001)[​](#cal-io-001 "Direct link to Filesystem Error (cal-io-001)") > 🔍 Error when performing an operation on a file on the filesystem. > > 🔧 Check that the file exists, you have the necessary permissions to read/write it, and that the file path is correct. Verify that there's sufficient disk space if writing files. Also ensure the file isn't locked by another process. #### Memory Map Failure (cal-io-002)[​](#cal-io-002 "Direct link to Memory Map Failure (cal-io-002)") > 🔍 Failed to memory map the provided file. > > 🔧 This error occurs when the system cannot create a memory-mapped view of the file. 
Check that: > > * The file exists and is accessible > * You have sufficient memory available > * The file isn't corrupted or locked by another process > * Your system supports memory mapping for files of this size #### Cached Detections for Wrong Plex (cal-io-003)[​](#cal-io-003 "Direct link to Cached Detections for Wrong Plex (cal-io-003)") > 🔍 The cached detections read in for this dataset did not match the component UUIDs in this plex. > > 🔧 The cached detection data was generated for a different plex configuration. Clear your cache and regenerate the detections, or ensure you're using the correct plex file that matches the cached detections. ### Code 2: CLI Errors[​](#code-2-cli-errors "Direct link to Code 2: CLI Errors") #### Missing Component (cal-cli-001)[​](#cal-cli-001 "Direct link to Missing Component (cal-cli-001)") > 🔍 There is no component with the specified UUID or name in the provided plex. > > 🔧 Using the `init` command to create a plex ensures that all components are properly named and assigned a UUID according to the dataset being processed. #### Duplicate Component Specification (cal-cli-002)[​](#cal-cli-002 "Direct link to Duplicate Component Specification (cal-cli-002)") > 🔍 The same component has been specified more than once. > > 🔧 Comparing a component to itself just gives identity. This is good news: The rules of the universe are still intact. #### Ambiguous Topic Mapping (cal-cli-003)[​](#cal-cli-003 "Direct link to Ambiguous Topic Mapping (cal-cli-003)") > 🔍 There are ambiguous topic-to-component mappings. > > 🔧 If you're using a glob matching pattern (i.e. `-m *image*:___`), make sure that there aren't glob patterns between assignments that could be interpreted as a duplicate. #### Invalid Key-Value Pair (cal-cli-004)[​](#cal-cli-004 "Direct link to Invalid Key-Value Pair (cal-cli-004)") > 🔍 Invalid key:value pair provided as an argument. No ':' found in the argument. > > 🔧 Format this argument as `key:value`. #### Invalid Model (cal-cli-005)[​](#cal-cli-005 "Direct link to Invalid Model (cal-cli-005)") > 🔍 The provided model is not recognized by MetriCal. > > 🔧 There are different models depending on the component type. Check the documentation for valid model names for your component type. #### Component Model Already Assigned (cal-cli-006)[​](#cal-cli-006 "Direct link to Component Model Already Assigned (cal-cli-006)") > 🔍 Cannot assign component name to topic; a component already exists in the plex with this name. > > 🔧 Choose a different component name or check if the component is already properly configured in your plex. #### Output Would Be Overwritten (cal-cli-007)[​](#cal-cli-007 "Direct link to Output Would Be Overwritten (cal-cli-007)") > 🔍 Previous file already exists at the specified path. > > 🔧 Deleting the file at the path above can remove this error; however, this may result in data loss. If this file is a plex or object space, this can cause some associations to become lost, like component relations in cached detections. > > Reference the --overwrite-strategy and --uuid-strategy arguments for more details on how to handle this behavior. #### No Plex in MCAP (cal-cli-008)[​](#cal-cli-008 "Direct link to No Plex in MCAP (cal-cli-008)") > 🔍 The MCAP at the specified path did not contain a plex. > > 🔧 Make sure that the provided MCAP file is a results MCAP from MetriCal. #### No Object Space in MCAP (cal-cli-009)[​](#cal-cli-009 "Direct link to No Object Space in MCAP (cal-cli-009)") > 🔍 The MCAP at the specified path did not contain an object-space. 
> > 🔧 Make sure that the provided MCAP file is a results MCAP from MetriCal. #### Missing Camera Motion Threshold (cal-cli-010)[​](#cal-cli-010 "Direct link to Missing Camera Motion Threshold (cal-cli-010)") > 🔍 The provided plex contains camera components, but no camera motion filter was specified. > > 🔧 Specify a camera motion filter using the `--camera-motion-threshold` argument. #### Missing Lidar Motion Threshold (cal-cli-011)[​](#cal-cli-011 "Direct link to Missing Lidar Motion Threshold (cal-cli-011)") > 🔍 The provided plex contains lidar components, but no lidar motion filter was specified. > > 🔧 Specify a lidar motion filter using the `--lidar-motion-threshold` argument. #### Variable Needs Substitution (cal-cli-012)[​](#cal-cli-012 "Direct link to Variable Needs Substitution (cal-cli-012)") > 🔍 Variable needs substitution but is missing. > > 🔧 The variable is referenced but not defined. Define it in the manifest or pass it via command line. #### Invalid Output Reference (cal-cli-013)[​](#cal-cli-013 "Direct link to Invalid Output Reference (cal-cli-013)") > 🔍 Invalid output reference that cannot be parsed. > > 🔧 Output references must be a file path or a blank string (""), which indicates the default filepath for that output in your workspace. Outputs as variable values are not permitted. Check your manifest for typos. ### Code 3: Shape Errors[​](#code-3-shape-errors "Direct link to Code 3: Shape Errors") #### No Valid Tabular Components (cal-shape-001)[​](#cal-shape-001 "Direct link to No Valid Tabular Components (cal-shape-001)") > 🔍 None of the components passed to the Shape::tabular command exist, so there was nothing to do. > > 🔧 Verify that the component names or UUIDs you specified exist in your plex. Check for typos in component names and ensure that the components are in your plex configuration. #### Component Cannot Make LUT (cal-shape-002)[​](#cal-shape-002 "Direct link to Component Cannot Make LUT (cal-shape-002)") > 🔍 The specified component is not a camera, so MetriCal can't produce LUTs. > > 🔧 Look-up tables (LUTs) can only be generated for camera components. Ensure that you're specifying a camera component for LUT generation. Check your plex to verify the component types. #### Bad Stereo Configuration (cal-shape-003)[​](#cal-shape-003 "Direct link to Bad Stereo Configuration (cal-shape-003)") > 🔍 The two cameras can't be a stereo pair, since there is no spatial constraint between them in the provided plex. > > 🔧 You can reference a component either by its name or UUID. Ensure that there is a valid spatial relationship (transformation) defined between the two camera components in your plex for stereo pair configuration. #### Missing Root Component (cal-shape-004)[​](#cal-shape-004 "Direct link to Missing Root Component (cal-shape-004)") > 🔍 The root component does not exist. > > 🔧 You may specify the root component by either its name or UUID. Verify that the component exists in your plex and that the name or UUID is spelled correctly. #### URDF Write Failure (cal-shape-005)[​](#cal-shape-005 "Direct link to URDF Write Failure (cal-shape-005)") > 🔍 The URDF could not be converted to a String. #### Builder Error (cal-shape-006)[​](#cal-shape-006 "Direct link to Builder Error (cal-shape-006)") > 🔍 Failed to construct a plex from this URDF. > > 🔧 This error occurs during plex construction from URDF data. Verify that your URDF file is properly formatted, contains valid robot description data, and that all referenced components and links are correctly defined. 
Check for missing joints, invalid transformations, or malformed XML structure. ### Code 4: Init Errors[​](#code-4-init-errors "Direct link to Code 4: Init Errors") #### Could Not Create Camera (cal-init-001)[​](#cal-init-001 "Direct link to Could Not Create Camera (cal-init-001)") > 🔍 Could not create a camera from the provided data for the specified component. > > 🔧 Verify that the data for this component contains valid image observations. #### Could Not Create Lidar (cal-init-002)[​](#cal-init-002 "Direct link to Could Not Create Lidar (cal-init-002)") > 🔍 Could not create a lidar from the provided data for the specified component. > > 🔧 Ensure that the data for this component contains valid lidar observations. #### Could Not Create IMU (cal-init-003)[​](#cal-init-003 "Direct link to Could Not Create IMU (cal-init-003)") > 🔍 Could not create an IMU from the provided data for the specified component. > > 🔧 Check that the data for this component contains valid IMU observations. #### Impossible Image Resolution (cal-init-004)[​](#cal-init-004 "Direct link to Impossible Image Resolution (cal-init-004)") > 🔍 Last read image for topic has a width or height of 0 px. > > 🔧 Check your image data to ensure that images have valid, non-zero dimensions. This may indicate corrupted image data or an issue with the image encoding. #### Image Resolution Inconsistent (cal-init-005)[​](#cal-init-005 "Direct link to Image Resolution Inconsistent (cal-init-005)") > 🔍 Last read image for topic has a resolution inconsistent with previous images. > > 🔧 Ensure that all images in a topic have the same resolution. MetriCal requires consistent image dimensions throughout the stream for proper camera calibration. #### No Topics Found (cal-init-006)[​](#cal-init-006 "Direct link to No Topics Found (cal-init-006)") > 🔍 None of the requested topics were found or readable in this dataset. > > 🔧 Check that the topics you specified exist in your dataset and are in a format that MetriCal can read. #### No Data to Init (cal-init-007)[​](#cal-init-007 "Direct link to No Data to Init (cal-init-007)") > 🔍 MetriCal was not able to create a Plex from the init data. > > 🔧 This error could be for a few reasons: > > * None of the topics requested exist in the dataset provided. > * The dataset provided is empty. > * The requested topics in the dataset provided are not in a format that MetriCal can read. > > If any of these apply, please check the dataset and try again. #### Could Not Create Radar (cal-init-008)[​](#cal-init-008 "Direct link to Could Not Create Radar (cal-init-008)") > 🔍 Could not create a radar from the provided data for the specified component. > > 🔧 Verify that the data for this component contains valid radar observations. #### Could Not Create Local Navigation System (cal-init-009)[​](#cal-init-009 "Direct link to Could Not Create Local Navigation System (cal-init-009)") > 🔍 Could not create a local navigation system from the provided data for the specified component. > > 🔧 Ensure that the data for this component contains valid local navigation system observations. #### Could Not Create Transform Tree (cal-init-010)[​](#cal-init-010 "Direct link to Could Not Create Transform Tree (cal-init-010)") > 🔍 Could not create a transform tree from the provided data for the specified component. > > 🔧 Check that the data for this component contains valid transform tree observations. 
### Code 6: License Errors[​](#code-6-license-errors "Direct link to Code 6: License Errors") #### Response Verification Failed (license-001)[​](#license-001 "Direct link to Response Verification Failed (license-001)") > 🔍 Verifying the license via API failed. > > 🔧 Ensure no network proxy is modifying HTTP content or headers. This can happen if the response's signature could not be verified, the response could not be properly decoded, or the response did not contain a necessary field. Contact if the issue persists. #### License File Verification Failed (license-002)[​](#license-002 "Direct link to License File Verification Failed (license-002)") > 🔍 Verifying the license-cache file failed. > > 🔧 Ensure a valid license-cache file exists at the specified path and has not been edited or corrupted. This error occurs when there's malformed JSON, missing/malformed JSON fields, or signature verification fails. #### Could Not Read License File (license-004)[​](#license-004 "Direct link to Could Not Read License File (license-004)") > 🔍 A license-cache file could not be read from disk. > > 🔧 Ensure a license-cache file exists at the specified path and is readable. This is usually the result of a permissions error or the file does not exist. #### Could Not Deserialize License File (license-005)[​](#license-005 "Direct link to Could Not Deserialize License File (license-005)") > 🔍 The license-cache file could not be parsed. > > 🔧 Ensure the license-cache file has not been edited or corrupted. This likely means the file was tampered with or the file was written by an old version of MetriCal without backwards-compatibility. #### No Default License File Path (license-006)[​](#license-006 "Direct link to No Default License File Path (license-006)") > 🔍 No default application config path could be found in order to locate or write the license-cache file. > > 🔧 Please contact . Offline operation is not possible without a license-cache file. This occurs when $HOME is not set or the user doesn't have an entry in /etc/passwd. #### Could Not Write License File (license-007)[​](#license-007 "Direct link to Could Not Write License File (license-007)") > 🔍 The license-cache file could not be written to disk. > > 🔧 Offline operation is not possible without a license-cache file. Ensure the directory exists and that the program has write permissions. This is usually the result of a permissions error and may cause this machine to be counted more than once for billing/licensing purposes. #### Could Not Serialize License File (license-008)[​](#license-008 "Direct link to Could Not Serialize License File (license-008)") > 🔍 The license-cache file could not be serialized to disk. > > 🔧 Offline operation is not possible without a license-cache file. Please contact us at if issue persists. This should never happen unless there are bugs in the serialization code. #### Cached License Expired (license-009)[​](#license-009 "Direct link to Cached License Expired (license-009)") > 🔍 The license-cache file is expired. > > 🔧 Run MetriCal with internet access to refresh the license. #### Cached License Key Mismatch (license-011)[​](#license-011 "Direct link to Cached License Key Mismatch (license-011)") > 🔍 The provided license key does not match the key in the license-cache file. > > 🔧 If you maintain different license-cache files (e.g. for different users), ensure you're using the license-cache file that matches the provided license key. Otherwise, delete your license-cache file and try again. 
#### Config Missing License (license-012)[​](#license-012 "Direct link to Config Missing License (license-012)") > 🔍 Failed to find license key in TOML config file. > > 🔧 No item named "license" found; possibly a typo in the config file? #### Config Malformed License (license-013)[​](#license-013 "Direct link to Config Malformed License (license-013)") > 🔍 License key in TOML config file is invalid. > > 🔧 License key must be a string with non-zero length. #### Could Not Read Config File (license-014)[​](#license-014 "Direct link to Could Not Read Config File (license-014)") > 🔍 Failed to read TOML config file. > > 🔧 Ensure the file exists and is readable, or provide an alternate license key source (CLI argument, environment variable). #### License Key Not Found (license-015)[​](#license-015 "Direct link to License Key Not Found (license-015)") > 🔍 No license key was found from any source. > > 🔧 Provide a license key via CLI argument, environment variable, or config file. #### No Default Config File Path (license-016)[​](#license-016 "Direct link to No Default Config File Path (license-016)") > 🔍 No default application config path could be found in order to locate the config file. > > 🔧 Please contact . This occurs when $HOME is not set or the user doesn't have an entry in /etc/passwd. #### Config Malformed TOML (license-017)[​](#license-017 "Direct link to Config Malformed TOML (license-017)") > 🔍 Config file is not valid TOML. > > 🔧 See documentation for an example of valid TOML. Check the TOML syntax and formatting in your config file. #### License API Error (license-018)[​](#license-018 "Direct link to License API Error (license-018)") > 🔍 An error occurred while querying the Hub API for license information. > > 🔧 Please review the included error message for more information. This could be due to network connectivity issues, API service unavailability, or authentication problems. #### Insufficient Credits (license-019)[​](#license-019 "Direct link to Insufficient Credits (license-019)") > 🔍 Your license has insufficient credits to calibrate the provided plex. > > 🔧 Purchase more credits at . Check your license's credit balance and the expected calibration cost to determine how many additional credits you need. ### Code 8: Calibrate Errors[​](#code-8-calibrate-errors "Direct link to Code 8: Calibrate Errors") #### No Features Detected (cal-calibrate-001)[​](#cal-calibrate-001 "Direct link to No Features Detected (cal-calibrate-001)") > 🔍 No features were detected from any specified object space, or the feature count per observation was too low for use. > > 🔧 This indicates an issue in your object space. There are a number of things that could have happened. Here are just a few suggestions: > > * Verify the measurements of your fiducials. These should be in meters. > * Verify the dictionary used for your boards, if applicable. For example, a 4x4 dictionary has targets made up of 4x4 squares in the inner pattern of the fiducial. > * If this is a Markerboard, make sure that you have checked that your `initial_corner` is set to the correct variant ('Marker' or 'Square'). > * Make sure you can see as much of the board as possible when collecting your data. Tangram filters detections that are all collinear, or detections which have fewer than 8 identified points in the image. > * Try adjusting your Camera or LiDAR filter settings to ensure that all detections are not filtered out. > * Running with cached detections? Your object space may have changed since the cached detections were generated. 
Try clearing your cache and re-running. #### Detector Error (cal-calibrate-002)[​](#cal-calibrate-002 "Direct link to Detector Error (cal-calibrate-002)") > 🔍 There was an error when working with the detector. > > 🔧 Check your detector configuration and ensure that your input data is compatible with the selected detector. #### Calibration Failed to Complete (cal-calibrate-003)[​](#cal-calibrate-003 "Direct link to Calibration Failed to Complete (cal-calibrate-003)") > 🔍 Failed to run calibration to completion. > > 🔧 Review your calibration parameters and input data. This error typically indicates issues with the least squares solver during optimization. #### Calibration Solution Interpretation Failed (cal-calibrate-004)[​](#cal-calibrate-004 "Direct link to Calibration Solution Interpretation Failed (cal-calibrate-004)") > 🔍 Failed to interpret the calibration solution. > > 🔧 The calibration completed but the results could not be properly interpreted. Check your calibration configuration and data quality. #### No Compatible Component Type (cal-calibrate-005)[​](#cal-calibrate-005 "Direct link to No Compatible Component Type (cal-calibrate-005)") > 🔍 There is no compatible component type to register against, so the calibration can't proceed. > > 🔧 Different component types rely on different modalities to support their calibration: > > * Cameras: no dependency > * Lidar: requires another lidar or a camera > * IMU: requires a camera > * Local Navigation System (LNS): requires a camera > > If processing any of these modalities, make sure these conditions are met before proceeding. Alternatively, remove this component from the calibration process. #### Compatible Component Has No Detections (cal-calibrate-006)[​](#cal-calibrate-006 "Direct link to Compatible Component Has No Detections (cal-calibrate-006)") > 🔍 This data had a compatible component type to register against, but that component type had no detections. > > 🔧 Different component types rely on different modalities to support their calibration: > > * Cameras: no dependency > * Lidar: requires another lidar or a camera > * IMU: requires a camera > * Local Navigation System (LNS): requires a camera > > One or more components could be paired with a compatible component type, but that other component type didn't have any usable detections! Gather more detections to calibrate this component. #### Compatible Component Detections Filtered (cal-calibrate-007)[​](#cal-calibrate-007 "Direct link to Compatible Component Detections Filtered (cal-calibrate-007)") > 🔍 This data had a compatible component type to register against, but all observations from that component type were filtered out from motion. > > 🔧 Different component types rely on different modalities to support their calibration: > > * Cameras: no dependency > * Lidar: requires another lidar or a camera > * IMU: requires a camera > * Local Navigation System (LNS): requires a camera > > One or more components could be paired with a compatible component type, but that other component type had all of its detections filtered out by the motion filter! We recommend taking another dataset, this time with pauses during motion. That, or raise the motion filter threshold. #### Camera Pose Estimate Failed (cal-calibrate-008)[​](#cal-calibrate-008 "Direct link to Camera Pose Estimate Failed (cal-calibrate-008)") > 🔍 The initial camera pose optimization failed to converge. > > 🔧 According to the optimization, there were camera detections for which a sane camera pose could not be derived. 
This may indicate an error in image readout, or an egregiously poor intrinsics initialization by MetriCal. Try two things: > > * Increase this camera's focal length in the init plex. This might just do the trick. > * Review this camera's data for anything unusual. Any corrupt images or bad detections could be causing this issue. #### License Server Contact Failed (cal-calibrate-009)[​](#cal-calibrate-009 "Direct link to License Server Contact Failed (cal-calibrate-009)") > 🔍 MetriCal was unable to report a successful calibration to the licensing server. > > 🔧 Because your license tier has metered calibrations, MetriCal needs to contact the license server to register each successful calibration. Even though your calibration succeeded, we were unable to reach the license server at this time and therefore cannot output calibration results. #### IMU Motion Profile Issues (cal-calibrate-010)[​](#cal-calibrate-010 "Direct link to IMU Motion Profile Issues (cal-calibrate-010)") > 🔍 Motion profile issues detected for IMU. > > 🔧 The IMU motion profile analysis detected issues with the excitation of this IMU. Please review the detected issues and consider re-collecting data with improved motion characteristics. > > If you would like to process the rest of this calibration, re-init the plex without this component and rerun the calibration. #### LNS Motion Profile Issues (cal-calibrate-011)[​](#cal-calibrate-011 "Direct link to LNS Motion Profile Issues (cal-calibrate-011)") > 🔍 Motion profile issues detected for LNS. > > 🔧 The LNS motion profile analysis detected issues with the trajectory of this LNS. Please review the detected issues and consider re-collecting data with improved motion characteristics. > > If you would like to process the rest of this calibration, re-init the plex without this component and rerun the calibration. ### Code 10: Rendering Errors[​](#code-10-rendering-errors "Direct link to Code 10: Rendering Errors") #### Missing Recording Stream (cal-rendering-001)[​](#cal-rendering-001 "Direct link to Missing Recording Stream (cal-rendering-001)") > 🔍 No global Rerun connection found. > > 🔧 Ensure that a global Rerun connection is established. The rendering system requires an active Rerun connection to visualize data. #### Observation Incompatible (cal-rendering-002)[​](#cal-rendering-002 "Direct link to Observation Incompatible (cal-rendering-002)") > 🔍 The observation wasn't compatible with the render function called. > > 🔧 Check that you're using the correct rendering function for your observation type. Each render function is designed for specific observation types - make sure the observation matches the intended type for the function you're calling. #### Empty Object Space (cal-rendering-003)[​](#cal-rendering-003 "Direct link to Empty Object Space (cal-rendering-003)") > 🔍 This object space is empty of renderable objects. > > 🔧 Ensure that your object space contains valid renderable objects before attempting to render it. Check that the object space has been properly populated with geometric or visual elements. #### Missing Component (cal-rendering-004)[​](#cal-rendering-004 "Direct link to Missing Component (cal-rendering-004)") > 🔍 There is no component with the specified name in the plex. > > 🔧 Verify that the component name exists in your plex configuration. Check for typos in the component name and ensure that the component has been properly defined and initialized in the plex. 
#### Recording Stream Error (cal-rendering-005)[​](#cal-rendering-005 "Direct link to Recording Stream Error (cal-rendering-005)") > 🔍 A Rerun recording stream error occurred. > > 🔧 This is an error from the underlying Rerun library. Check the Rerun connection status, ensure the Rerun viewer is running if needed, and verify that the recording stream configuration is correct. #### Image Conversion Error (cal-rendering-006)[​](#cal-rendering-006 "Direct link to Image Conversion Error (cal-rendering-006)") > 🔍 An image conversion error occurred. > > 🔧 This error occurs when converting image data for rendering. Check that your image data is in a supported format, has valid dimensions, and that the pixel data is not corrupted. Ensure the image encoding matches what's expected. #### Empty Observation (cal-rendering-007)[​](#cal-rendering-007 "Direct link to Empty Observation (cal-rendering-007)") > 🔍 Cannot render an empty observation. > > 🔧 The observation contains no data to render. Ensure that your observation has valid data before attempting to render it. Check that data collection was successful and that the observation is properly populated. #### Object Space Construction Error (cal-rendering-008)[​](#cal-rendering-008 "Direct link to Object Space Construction Error (cal-rendering-008)") > 🔍 An error occurred during object space construction. > > 🔧 This is an error from the underlying object space construction system. Check that your object space definition is valid, all required parameters are provided, and that referenced resources (meshes, textures, etc.) are accessible. ### Code 12: Diagnostic Errors[​](#code-12-diagnostic-errors "Direct link to Code 12: Diagnostic Errors") 🔍 Error-level data diagnostic detected; calibration failed. 🔧 Since an Error-level data diagnostic was thrown, MetriCal will not output a calibration results file by default. If your license plan includes a limited number of calibrations, runs that do not produce a calibration results file will not count toward that limit. This safeguard can be overridden by passing the [`--override-diagnostics`](/metrical/commands/calibration/calibrate.md#--override-diagnostics) flag to the `calibrate` command. This will allow you to get a calibration if you feel your data has been wrongly diagnosed as insufficient. However, this will count toward your calibration limit. ### Code 13: Format Errors[​](#code-13-format-errors "Direct link to Code 13: Format Errors") #### JSON Deserialization Error (cal-format-001)[​](#cal-format-001 "Direct link to JSON Deserialization Error (cal-format-001)") > 🔍 Failed to deserialize data from source JSON. > > 🔧 Check that your JSON file is properly formatted and contains valid syntax. Look for missing commas, brackets, or quotes. The error message will indicate the specific line and column where the parsing failed. #### TOML Deserialization Error (cal-format-002)[​](#cal-format-002 "Direct link to TOML Deserialization Error (cal-format-002)") > 🔍 Failed to deserialize data from source TOML. > > 🔧 Check that your TOML file is properly formatted and follows TOML syntax rules. Verify that keys are properly quoted, values have the correct types, and tables are properly structured. #### JSON Serialization Error (cal-format-003)[​](#cal-format-003 "Direct link to JSON Serialization Error (cal-format-003)") > 🔍 Failed to serialize data to JSON format. > > 🔧 This usually indicates an internal issue with data structures that cannot be represented in JSON. Contact . 
#### MCAP File Error (cal-format-004)[​](#cal-format-004 "Direct link to MCAP File Error (cal-format-004)") > 🔍 Failed to read something from an MCAP file. > > 🔧 Ensure that the MCAP file is not corrupted and is a valid MCAP format. Check that you have read permissions for the file and that it was created with a compatible version of the MCAP library. #### URDF File Error (cal-format-005)[​](#cal-format-005 "Direct link to URDF File Error (cal-format-005)") > 🔍 Failed to read from a URDF file. > > 🔧 Verify that your URDF file is valid XML and follows the URDF specification. Check for proper tag closure, valid attribute values, and ensure that all referenced files (meshes, textures) exist and are accessible. #### MessagePack Serialization Error (cal-format-006)[​](#cal-format-006 "Direct link to MessagePack Serialization Error (cal-format-006)") > 🔍 Failed to write/serialize a struct to MessagePack format. > > 🔧 This indicates an issue with serializing data to the MessagePack binary format. Check that all data types in your structure are supported by MessagePack and that there are no circular references in your data. ### Code 14: Manifest Errors[​](#code-14-manifest-errors "Direct link to Code 14: Manifest Errors") #### Missing Required Variables (cal-manifest-001)[​](#cal-manifest-001 "Direct link to Missing Required Variables (cal-manifest-001)") > 🔍 Missing/misconfigured variables needed for manifest execution. > > 🔧 Configure the missing and misconfigured variables using `--set VARIABLE=`. Check that all referenced variables are properly defined and that their values are valid for your manifest configuration. #### Manifest Structure Cycle (cal-manifest-002)[​](#cal-manifest-002 "Direct link to Manifest Structure Cycle (cal-manifest-002)") > 🔍 Circular dependency detected in manifest structure. > > 🔧 The manifest has a circular dependency that prevents proper execution order. Remove dependencies that create cycles in your manifest. Check the dependency chain and ensure that stages don't reference each other in a loop. #### Invalid Stage Reference (cal-manifest-003)[​](#cal-manifest-003 "Direct link to Invalid Stage Reference (cal-manifest-003)") > 🔍 Invalid stage reference in the manifest. > > 🔧 Ensure that all stage references point to valid, existing stages in your manifest. Check for typos in stage names and verify that referenced stages are properly defined. #### Forward Stage Reference (cal-manifest-004)[​](#cal-manifest-004 "Direct link to Forward Stage Reference (cal-manifest-004)") > 🔍 Forward stage reference: a stage cannot reference another stage that comes later in execution order. > > 🔧 Stages execute in the order they appear in the manifest. Move the referenced stage before the current stage, or reference an earlier stage instead. The execution order must follow a linear progression. #### Stage Named Reserved Word (cal-manifest-005)[​](#cal-manifest-005 "Direct link to Stage Named Reserved Word (cal-manifest-005)") > 🔍 Stage name is reserved and cannot be used. > > 🔧 The specified stage name is reserved for manifest parsing. Please rename this stage to something else. Reserved words are used internally by the manifest system and cannot be used as stage names. --- # MetriCal Commands MetriCal is a powerful program with many different tools to help you calibrate your system. Each one is designed for a specific purpose, and can be used in conjunction with other commands to achieve your desired calibration results. 
## Manifest or CLI?[​](#manifest-or-cli "Direct link to Manifest or CLI?") MetriCal commands can be run either through a manifest file (see the [New](/metrical/commands/manifest/new.md) and [Run](/metrical/commands/manifest/run.md) commands) or directly via the command line interface. The manifest file is a TOML file that contains all the necessary information to run a standard calibration workflow, while the CLI allows you to run commands one-by-one directly from the terminal. Though there's no "right" way to use MetriCal, we recommend using the manifest file for most users, as it allows for easier reproducibility and version control while also double-checking dependencies and I/O logic. The CLI is best suited for debugging and rapid testing; in fact, there are commands specifically designed for debugging purposes that are not available for use in the manifest file. ## Global Arguments[​](#global-arguments "Direct link to Global Arguments") [![Global Arguments](/_assets/global_arguments.jpg)](/metrical/commands/global_arguments.md) ### [Global Arguments](/metrical/commands/global_arguments.md) [Overview of global arguments available in MetriCal commands.](/metrical/commands/global_arguments.md) [View Details →](/metrical/commands/global_arguments.md) ## MetriCal Manifest[​](#metrical-manifest "Direct link to MetriCal Manifest") [![Manifest Overview](/_assets/manifest_overview.jpg)](/metrical/commands/manifest/manifest_overview.md) ### [Manifest Overview](/metrical/commands/manifest/manifest_overview.md) [What a manifest is and how it is used in MetriCal.](/metrical/commands/manifest/manifest_overview.md) [View Guide →](/metrical/commands/manifest/manifest_overview.md) [![Command: New](/_assets/manifest_new.jpg)](/metrical/commands/manifest/new.md) ### [Command: New](/metrical/commands/manifest/new.md) [Create a new MetriCal manifest with default configuration.](/metrical/commands/manifest/new.md) [View Command →](/metrical/commands/manifest/new.md) [![Command: Run](/_assets/manifest_run.jpg)](/metrical/commands/manifest/run.md) ### [Command: Run](/metrical/commands/manifest/run.md) [Run a MetriCal manifest. Execute an entire calibration workflow.](/metrical/commands/manifest/run.md) [View Command →](/metrical/commands/manifest/run.md) [![Command: Validate](/_assets/manifest_validate.jpg)](/metrical/commands/manifest/validate.md) ### [Command: Validate](/metrical/commands/manifest/validate.md) [Validate a MetriCal manifest for correctness and completeness.](/metrical/commands/manifest/validate.md) [View Command →](/metrical/commands/manifest/validate.md) ## Calibration And Processing[​](#calibration-and-processing "Direct link to Calibration And Processing") [![Command: Init](/_assets/calibrate_init.jpg)](/metrical/commands/calibration/init.md) ### [Command: Init](/metrical/commands/calibration/init.md) [Profile a dataset for calibration. 
Create or modify a Plex based on this data.](/metrical/commands/calibration/init.md) [View Command →](/metrical/commands/calibration/init.md) [![Command: Calibrate](/_assets/calibrate_calibrate.jpg)](/metrical/commands/calibration/calibrate.md) ### [Command: Calibrate](/metrical/commands/calibration/calibrate.md) [Calibrate a sensor system given an initial plex and object space configuration.](/metrical/commands/calibration/calibrate.md) [View Command →](/metrical/commands/calibration/calibrate.md) [![The Shape Commands](/_assets/calibrate_shape.jpg)](/metrical/commands/calibration/shape/shape_overview.md) ### [The Shape Commands](/metrical/commands/calibration/shape/shape_overview.md) [Modify a plex into any number of different helpful output formats.](/metrical/commands/calibration/shape/shape_overview.md) [View Command →](/metrical/commands/calibration/shape/shape_overview.md) [![Command: Consolidate](/_assets/calibrate_survey.jpg)](/metrical/commands/calibration/consolidate.md) ### [Command: Consolidate](/metrical/commands/calibration/consolidate.md) [Consolidate compatible targets into a single object space.](/metrical/commands/calibration/consolidate.md) [View Command →](/metrical/commands/calibration/consolidate.md) ## CLI Utilities[​](#cli-utilities "Direct link to CLI Utilities") [![Command: Report](/_assets/utilities_report.jpg)](/metrical/commands/cli_utilities/report.md) ### [Command: Report](/metrical/commands/cli_utilities/report.md) [Generate the report from the results of a calibration.](/metrical/commands/cli_utilities/report.md) [View Command →](/metrical/commands/cli_utilities/report.md) [![Command: Display](/_assets/utilities_display.png)](/metrical/commands/cli_utilities/display.md) ### [Command: Display](/metrical/commands/cli_utilities/display.md) [Render a dataset. Apply the calibrated plex and render the fused data.](/metrical/commands/cli_utilities/display.md) [View Command →](/metrical/commands/cli_utilities/display.md) [![Command: Completion](/_assets/utilities_completion.jpg)](/metrical/commands/cli_utilities/completion.md) ### [Command: Completion](/metrical/commands/cli_utilities/completion.md) [Generate autocomplete files for common shell variants.](/metrical/commands/cli_utilities/completion.md) [View Command →](/metrical/commands/cli_utilities/completion.md) ## Errors and Troubleshooting[​](#errors-and-troubleshooting "Direct link to Errors and Troubleshooting") [![Errors and Troubleshooting](/_assets/commands_errors.jpg)](/metrical/commands/command_errors.md) ### [Errors and Troubleshooting](/metrical/commands/command_errors.md) [Common errors, and how to address them.](/metrical/commands/command_errors.md) [View Command →](/metrical/commands/command_errors.md) --- # Global Arguments Most commands in MetriCal have their own options and settings. However, there are a few common arguments that are applicable to all commands. #### --license \[LICENSE][​](#--license-license "Direct link to --license \[LICENSE]") > The license key for MetriCal. > > There are three ways to pass in a license key: > > * Set it via the command line argument `--license` > * Set the `TANGRAM_VISION_LICENSE` environment variable > * Set it in the Tangram config TOML, typically located at `$HOME/.config/tangram-vision/config.toml` > > License keys are checked and validated in this order. Read more about licensing in the [licensing usage guide](/metrical/configuration/license_usage.md). 
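As a quick sketch of those three options side by side, using `calibrate` as a stand-in for any MetriCal command and `key2/{your_key}` as a placeholder for your actual key:

```
# 1. Command line argument
metrical --license="key2/{your_key}" calibrate ...

# 2. Environment variable
TANGRAM_VISION_LICENSE="key2/{your_key}" metrical calibrate ...

# 3. Tangram config TOML at $HOME/.config/tangram-vision/config.toml:
#    license = "key2/{your_key}"
```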
#### --license-timeout \[LICENSE\_TIMEOUT][​](#--license-timeout-license_timeout "Direct link to --license-timeout \[LICENSE_TIMEOUT]") > Default: 10 > > The timeout (in seconds) for calls to the Tangram licensing server, used when making licensing HTTP requests. #### --report-path \[REPORT\_PATH][​](#--report-path-report_path "Direct link to --report-path \[REPORT_PATH]") > The path at which to save the TUI output. MetriCal can write the TUI output of any command to an HTML file. You can also just redirect stderr, though this will subsume the interactive output. #### -vv, -v, -q, -qq, -qqq[​](#-vv--v--q--qq--qqq "Direct link to -vv, -v, -q, -qq, -qqq") > The logging verbosity for MetriCal. > > MetriCal uses the [log](https://docs.rs/log/latest/log/) crate to produce logs at various levels of priority. Users can set the log level by passing verbosity flags to any MetriCal command: > > * `-vv`: Trace > * `-v`: Debug > * default: Info > * `-q`: Warn > * `-qq`: Error > * `-qqq`: Off > > Some outputs will change slightly depending on the verbosity level to avoid spamming your console. #### --color \[WHEN][​](#--color-when "Direct link to --color \[WHEN]") > Possible values: `enabled`, `disabled`, `auto` > > Default: `auto` > > Whether or not to use ANSI color codes in the terminal output. If set to `auto`, MetriCal will attempt to detect if the output is a TTY and enable colors accordingly. This does not affect the colors written to the report.html file, if one is generated. #### --progress-indicators \[WHEN][​](#--progress-indicators-when "Direct link to --progress-indicators \[WHEN]") > Possible values: `enabled`, `disabled`, `auto` > > Default: `auto` > > Whether or not to enable progress bars and other animated displays in the terminal output. If set to `auto`, MetriCal will attempt to detect if the output is a TTY and enable progress bars accordingly. #### -V, --version[​](#-v---version "Direct link to -V, --version") > Print the current version of MetriCal. #### -h, --help[​](#-h---help "Direct link to -h, --help") > Print the help message for this command or function. `-h` prints a summary of the documentation presented by `--help`. --- # Manifest Overview The MetriCal manifest is a central configuration file that defines the workspace for your calibration tasks. It specifies the sensors, calibration targets, data sources, and various settings required for MetriCal to perform calibrations effectively. Using a manifest allows you to: * Define complex calibration setups involving multiple sensors and targets * Reuse configurations across different calibration sessions * Maintain consistency in calibration parameters and settings * Verify the order of operations of your calibration setup They're handy things, useful for development and production alike! Manifest Creation with `metrical new` You should never have to write a manifest from scratch! Use the [`metrical new`](/metrical/commands/manifest/new.md) command to create a new manifest with default settings, which you can then modify to suit your needs. ## The Anatomy of a Manifest[​](#the-anatomy-of-a-manifest "Direct link to The Anatomy of a Manifest") There are two main sections to a MetriCal manifest: the Project and the Stages. 
Let's break down a simple example manifest to understand what's going on: metrical.toml ``` # Project Metadata #------------------- [project] name = "MetriCal Demo Manifest" version = "15.0.0" description = "Demo calibration manifest" workspace = "/home/cal_engineer/metrical_workspace/" # Project Variables #------------------- [project.variables.data-dir] description = "The directory where our data and object space live" value = "/my/data/dir" [project.variables.object-space] description = "Our object space file for this calibration" value = "my_cal_objects.json" [project.variables.dataset] description = "Our dataset for this calibration" value = "my_cal.mcap" # Stages #------------------- [stages.first-stage] command = "init" dataset = "{{variables.data-dir}}/{{variables.dataset}}" ... # All of our init arguments go here initialized-plex = "{{auto}}" [stages.second-stage] command = "calibrate" dataset = "{{variables.data-dir}}/{{variables.dataset}}" input-plex = "{{first-stage.initialized-plex}}" input-object-space = "{{variables.data-dir}}/{{variables.object-space}}" ... # All of our calibration arguments go here detections = "{{auto}}" results = "{{auto}}" ``` ## Project Metadata[​](#project-metadata "Direct link to Project Metadata") ``` [project] name = "MetriCal Demo Manifest" version = "15.0.0" description = "Demo calibration manifest" workspace = "/home/cal_engineer/metrical_workspace/" ``` This holds metadata about the manifest, as well as the workspace path where all outputs will be stored. Unless otherwise directed, manifests will automatically create the workspace directory to hold all outputs; each stage's outputs will be stored in a subdirectory named after the stage. | Key | Description | | ----------- | ----------- | | name | The name of the project. | | version | The version of the project. This is not tied to MetriCal's versioning, but can be used for your own tracking purposes. | | description | A brief description of the project. | | workspace | The path to the workspace directory where all outputs will be stored. This can be overridden in the [Run](/metrical/commands/manifest/run.md) command using the `--workspace` argument. | ## Project Variables[​](#project-variables "Direct link to Project Variables") ``` [project.variables.data-dir] description = "The directory where our data and object space live" value = "/my/data/dir" [project.variables.object-space] description = "Our object space file for this calibration" value = "my_cal_objects.json" [project.variables.dataset] description = "Our dataset for this calibration" value = "my_cal.mcap" ``` Manifests allow the user to define variables for use in the Manifest's Stages. Each variable can be referenced as an input to a manifest stage using the syntax `{{variables.<variable-name>}}`. We'll see an example of this in the Stages section below. ### Override Variables with `--set`[​](#override-variables-with---set "Direct link to override-variables-with---set") When running a manifest using the [Run](/metrical/commands/manifest/run.md) command, you can override any variable defined in the manifest using the `--set` argument. 
For example, to override the `data-dir` variable defined above, you could run: ``` metrical run /path/to/metrical.toml --set data-dir:/new/data/dir ``` ## Stages[​](#stages "Direct link to Stages") ``` [stages.first-stage] command = "init" dataset = "{{variables.data-dir}}/{{variables.dataset}}" ... # All of our init arguments go here initialized-plex = "{{auto}}" [stages.second-stage] command = "calibrate" dataset = "{{variables.data-dir}}/{{variables.dataset}}" input-plex = "{{first-stage.initialized-plex}}" input-object-space = "{{variables.data-dir}}/{{variables.object-space}}" ... # All of our calibration arguments go here detections = "{{auto}}" results = "{{auto}}" ``` Each stage in the manifest has both a name (`first-stage`, `second-stage`, whatever) and a command. From there, the stage inputs, config, and outputs are defined by that command's CLI arguments. If you don't know what a command's argument does, just run `metrical <command> --help` for the full details (or, uh, read these docs). ### Stage Execution Order[​](#stage-execution-order "Direct link to Stage Execution Order") Note that the order of execution is determined by the input and output dependencies of each stage, not the ordering in the manifest file itself. Manifests will not process if there is a cyclic dependency between stages or a missing input dependency. ### The `{{auto}}` Keyword[​](#the-auto-keyword "Direct link to the-auto-keyword") It's common for stages to produce outputs that are then used as inputs to later stages. To make this easier, MetriCal supports the special `{{auto}}` keyword for stage outputs. When a stage output is set to `{{auto}}`, MetriCal will automatically generate an appropriate output file path within the workspace for that output. This is especially useful for outputs like initialized plexes, detections, and results files that are commonly passed between stages. Note that you don't have to use `{{auto}}` for outputs; you can specify explicit paths if you prefer, and the manifest will use those instead. ### Referencing Stage Outputs[​](#referencing-stage-outputs "Direct link to Referencing Stage Outputs") Stages can reference the outputs of previous stages using the syntax `{{<stage-name>.<output-name>}}`. In the example above, the `second-stage` references the `initialized-plex` output of the `first-stage` using `{{first-stage.initialized-plex}}`. This allows for seamless chaining of stages where the output of one stage becomes the input to another. --- # MetriCal Command: New ## Usage[​](#usage "Direct link to Usage") New - CLI Example ``` metrical new [OPTIONS] [MANIFEST_PATH] ``` ## Purpose[​](#purpose "Direct link to Purpose") This command initializes a new MetriCal manifest file `metrical.toml` with default configuration settings at the specified `MANIFEST_PATH`. To make your life easier when setting up new projects, everything that's outlined in the [manifest overview](/metrical/commands/manifest/manifest_overview.md) is included in this default config as comments. ## Examples[​](#examples "Direct link to Examples") #### Create a new MetriCal manifest in the specified directory.[​](#create-a-new-metrical-manifest-in-the-specified-directory "Direct link to Create a new MetriCal manifest in the specified directory.") > ``` > metrical new /path/to/directory > ``` > > ...where `/path/to/directory` is the directory in which to create the new manifest file `metrical.toml`. 
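To sketch how `new` fits into the broader manifest workflow described in this section (the project path below is only an example), a typical sequence might look like:

```
metrical new ~/my_project                        # writes ~/my_project/metrical.toml
# ...edit metrical.toml to point at your data and object space...
metrical validate ~/my_project/metrical.toml     # catch config issues before running
metrical run ~/my_project/metrical.toml --workspace ./output
```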
## Options[​](#options "Direct link to Options") #### Global Arguments[​](#global-arguments "Direct link to Global Arguments") As with every command, all [global arguments](/metrical/commands/global_arguments.md) are supported (though not all may be used). --- # MetriCal Command: Run ## Usage[​](#usage "Direct link to Usage") Run - CLI Example ``` metrical run [OPTIONS] [MANIFEST_PATH] --workspace [WORKSPACE] ``` ## Purpose[​](#purpose "Direct link to Purpose") This command executes a MetriCal manifest file located at the specified `MANIFEST_PATH`. The manifest contains all the necessary configuration and parameters to carry out a complete calibration process. See the [manifest overview](/metrical/commands/manifest/manifest_overview.md) for more details. Note that the order of execution is determined by the input and output dependencies of each stage, not the ordering in the manifest file itself. Manifests will not process if there is a cyclic dependency between stages or a missing input dependency. ### Automatic Reporting[​](#automatic-reporting "Direct link to Automatic Reporting") Unlike other modes, the Run command automatically generates an HTML report for the entire manifest run in the final workspace directory. If the [`--report-path` option](/metrical/commands/global_arguments.md#--report-path-report_path) is provided, however, the report will be saved to the specified location. ### Error Handling[​](#error-handling "Direct link to Error Handling") If any stage fails during execution, the entire manifest run will halt immediately and return the exit code, along with any diagnostic information. This ensures that subsequent stages do not run with incomplete or invalid data. Users should review the error messages provided to diagnose and resolve issues before re-running the manifest. ## Examples[​](#examples "Direct link to Examples") #### Run a MetriCal manifest located in the specified directory.[​](#run-a-metrical-manifest-located-in-the-specified-directory "Direct link to Run a MetriCal manifest located in the specified directory.") > ``` > metrical run /path/to/directory/.toml > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[MANIFEST\_PATH][​](#manifest_path "Direct link to \[MANIFEST_PATH]") > The path to the TOML manifest. ## Options[​](#options "Direct link to Options") #### Global Arguments[​](#global-arguments "Direct link to Global Arguments") As with every command, all [global arguments](/metrical/commands/global_arguments.md) are supported (though not all may be used). #### --workspace \[WORKSPACE][​](#--workspace-workspace "Direct link to --workspace \[WORKSPACE]") > The output directory for the final results. By default, if no workspace is provided, the workspace defined in the manifest file will be used. > > Results are stored in subdirectories per step. For example, if the output directory is `./output`, and there are steps named `init` and `calibrate`, then results will be stored in `./output/init` and `./output/calibrate`. #### --set \[OVERRIDE\_VARIABLES][​](#--set-override_variables "Direct link to --set \[OVERRIDE_VARIABLES]") > Override variables in the manifest. > > Say we have a variable named `data-to-run`: > > ``` > [project.variables.data-to-run] > description = "The dataset to use for this run" > value = "default_dataset.mcap" > ``` > > However, we don't want to use `default_dataset.mcap` for this next run; we have another dataset, `better_dataset.mcap`, that would work better. > > MetriCal makes this easy with the `--set` option. 
You can overwrite this default value from the command line like so: `--set data-to-run:better_dataset.mcap`. Now, MetriCal will use this new value for the run instead of the default. --- # MetriCal Command: Validate ## Usage[​](#usage "Direct link to Usage") Validate - CLI Example ``` metrical validate [OPTIONS] [MANIFEST_PATH] --workspace [WORKSPACE] ``` ## Purpose[​](#purpose "Direct link to Purpose") This command validates a MetriCal manifest file located at the specified `MANIFEST_PATH`, but doesn't actually run the process described in the manifest. This is useful for checking that your manifest is correctly configured before executing potentially time-consuming calibration tasks. The validation process checks for issues such as missing dependencies, incorrect configurations, and cyclic dependencies between stages. ## Examples[​](#examples "Direct link to Examples") #### Validate a MetriCal manifest located in the specified directory.[​](#validate-a-metrical-manifest-located-in-the-specified-directory "Direct link to Validate a MetriCal manifest located in the specified directory.") > ``` > metrical validate /path/to/directory/.toml > ``` ## Arguments[​](#arguments "Direct link to Arguments") #### \[MANIFEST\_PATH][​](#manifest_path "Direct link to \[MANIFEST_PATH]") > The path to the TOML manifest. ## Options[​](#options "Direct link to Options") #### Global Arguments[​](#global-arguments "Direct link to Global Arguments") As with every command, all [global arguments](/metrical/commands/global_arguments.md) are supported (though not all may be used). #### --workspace \[WORKSPACE][​](#--workspace-workspace "Direct link to --workspace \[WORKSPACE]") > The output directory for the final results. By default, if no workspace is provided, the workspace defined in the manifest file will be used. > > Results are stored in subdirectories per step. For example, if the output directory is `./output`, and there are steps named `init` and `calibrate`, then results will be stored in `./output/init` and `./output/calibrate`. #### --set \[OVERRIDE\_VARIABLES][​](#--set-override_variables "Direct link to --set \[OVERRIDE_VARIABLES]") > Override variables in the manifest. > > Say we have a variable named `data-to-run`: > > ``` > [project.variables.data-to-run] > description = "The dataset to use for this run" > value = "default_dataset.mcap" > ``` > > However, we don't want to use `default_dataset.mcap` for this next run; we have another dataset, `better_dataset.mcap`, that would work better. > > MetriCal makes this easy with the `--set` option. You can overwrite this default value from the command line like so: `--set data-to-run:better_dataset.mcap`. Now, MetriCal will use this new value for the run instead of the default. --- # Valid Data Formats * MCAP Files * ROS1 Bags * Folders ## MCAP Files[​](#mcap-files "Direct link to MCAP Files") [MCAP](//mcap.dev) is a flexible serialization format that supports a wide range of options and message encodings. This includes the capability to encode ROS1, ROS2/CDR serialized, Protobuf, Flatbuffer, and more. Header Time vs Log Time MetriCal uses the *header timestamp* in each message for synchronization purposes. If your current workflow uses the log time instead, you should make sure that the header timestamp is *also* populated during recording. If not, MetriCal will not be able to synchronize your data correctly. 
### Message Encodings[​](#message-encodings "Direct link to Message Encodings") MetriCal only supports a limited subset of the total [well-known message encodings](//mcap.dev/spec/registry#message-encodings) in MCAP. | Message Type | ROS1 Encodings | ROS2 with CDR Serialization | Protobuf Serialization Schemas | | --------------- | ------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------- | | Image | [sensor\_msgs/Image](//docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Image.html) | [sensor\_msgs/msgs/Image](//github.com/ros2/common_interfaces/blob/iron/sensor_msgs/msg/Image.msg) | [RawImage.proto](https://github.com/foxglove/schemas/blob/main/schemas/proto/foxglove/RawImage.proto) | | CompressedImage | [sensor\_msgs/CompressedImage](//docs.ros.org/en/noetic/api/sensor_msgs/html/msg/CompressedImage.html) | [sensor\_msgs/msgs/CompressedImage](//github.com/ros2/common_interfaces/blob/iron/sensor_msgs/msg/CompressedImage.msg) | [CompressedImage.proto](https://github.com/foxglove/schemas/blob/main/schemas/proto/foxglove/CompressedImage.proto) | | PointCloud | [sensor\_msgs/PointCloud2](//docs.ros.org/en/noetic/api/sensor_msgs/html/msg/PointCloud2.html) | [sensor\_msgs/msgs/PointCloud2](//github.com/ros2/common_interfaces/blob/iron/sensor_msgs/msg/PointCloud2.msg) | [PointCloud.proto](https://github.com/foxglove/schemas/blob/main/schemas/proto/foxglove/PointCloud.proto) | | IMU | [sensor\_msgs/Imu](//docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Imu.html) | [sensor\_msgs/msgs/Imu](//github.com/ros2/common_interfaces/blob/iron/sensor_msgs/msg/Imu.msg) | -- | | Odometry | [nav\_msgs/Odometry](//docs.ros.org/en/noetic/api/nav_msgs/html/msg/Odometry.html) | [nav\_msgs/msgs/Odometry](//github.com/ros2/common_interfaces/blob/iron/nav_msgs/msg/Odometry.msg) | -- | | H264 Video | -- | -- | [CompressedVideo.proto](https://github.com/foxglove/schemas/blob/main/schemas/proto/foxglove/CompressedVideo.proto) | ### Valid Image Encodings[​](#valid-image-encodings "Direct link to Valid Image Encodings") | Image type | Encoding | | --------------------- | -------------------------------------------------------------------------------------------------------------------------- | | Ordered pixels | `mono8`, `mono16`, `rgb8`, `bgr8`, `rgba8`, `bgra8`, `rgb16`, `bgr16`, `rgba16`, `bgra16` | | 8-bit, multi-channel | `8UC1`, `8UC2`, `8UC3`, `8UC4`, `8SC1`, `8SC2`, `8SC3`, `8SC4` | | 16-bit, multi-channel | `16UC1`, `16UC2`, `16UC3`, `16UC4`, `16SC1`, `16SC2`, `16SC3`, `16SC4` | | 32-bit, multi-channel | `32UC1`, `32UC2`, `32UC3`, `32UC4`, `32SC1`, `32SC2`, `32SC3`, `32SC4` | | 64-bit, multi-channel | `64UC1`, `64UC2`, `64UC3`, `64UC4`, `64SC1`, `64SC2`, `64SC3`, `64SC4` | | Bayer images | `bayer_rggb8`, `bayer_bggr8`, `bayer_gbrg8`, `bayer_grbg8`, `bayer_rggb16`, `bayer_bggr16`, `bayer_gbrg16`, `bayer_grbg16` | | YUYV images | `uyvy`, `UYVY`, `yuv422`, `yuyv`, `YUYV`, `yuv422_yuy2` | ## ROS1 Bags[​](#ros1-bags "Direct link to ROS1 Bags") ROS1 itself is no longer supported. However, that doesn't mean you can't use MetriCal! Instead, just convert your ROS1 bags to MCAP using the [MCAP CLI tool](https://mcap.dev/guides/cli): ``` mcap convert [your].bag [your].mcap ``` Then, use the resulting MCAP file with MetriCal as normal. 
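If you want to sanity-check the conversion before handing the file to MetriCal, the MCAP CLI's `info` command summarizes the topics, message counts, and encodings it finds (the file name is the same placeholder as above):

```
mcap info [your].mcap
```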
## Folders[​](#folders "Direct link to Folders") If your system does not use ROS or encode data as an MCAP file at all, you can still use MetriCal by providing it with recursive folders of structured data. This approach is a little easier to get started with compared to using MCAP files, but can leave a lot of performance (and data size, as MCAP files are often compressed by default) on the table. ### Message Types[​](#message-types "Direct link to Message Types") | Type | Guidelines | | ----------- | ----------- | | Image | Must be in JPEG or PNG format | | Point Cloud | Must be in PCD format. See more on [valid PCD messages](#pointcloud2-messages-and-pcd-files) below | ### Folder format description[​](#folder-format-description "Direct link to Folder format description") MetriCal assumes that the folder layout looks something like the following: ``` . └── data/ <--- passed as $DATA argument in CLI ├── topic_1/ ├── topic_2/ └── topic_3/ ``` ![Calibration Data Folder Structure](/assets/images/cal_data_folder_structure-6fa37a1ad675b48debf0b5c61c77a409.png) Each directory contains inputs whose file names correspond to timestamps (canonically, in nanoseconds). For example, if we had a message topic named `camera_1`, we might have the following example tree of files: ``` . ├── camera_1/ │   ├── 1643230518150000000.png │   ├── 1643230523197000000.png │   ├── 1643230526125000000.png │   ├── 1643230529419000000.png │   ├── 1643230532161000000.png │   └── 1643230537869000000.png ... ``` The folder names should correspond to the default topic names (e.g. as in a ROS bag). Topics do not need to have the same number of messages, or even share exactly the same timestamps as other topics. However, these timestamps will be assumed to be synced according to what is provided in the [plex](/metrical/core_concepts/plex_overview.md). ### PointCloud2 messages and PCD Files[​](#pointcloud2-messages-and-pcd-files "Direct link to PointCloud2 messages and PCD Files") MetriCal has partial support for v0.7 of the [pcd file format](//pointclouds.org/documentation/tutorials/pcd_file_format.html) for reading point cloud data. Since pcd is a very flexible format, we impose the following restrictions on point clouds we can process: * Fields named "x", "y" and "z" are all present (referred to as Geometric data) * Geometric data fields are floating point (`f32`, `float`) values * One of the fields "intensity", "i", or "reflectivity" is present (referred to as Intensity data) * Intensity data contains data of one of the following types * Unsigned 8 bit integer (`u8`, `uint8_t`) * Unsigned 16 bit integer (`u16`, `uint16_t`) * Unsigned 32 bit integer (`u32`, `uint32_t`) * 32 bit floating point (`f32`, `float`) * Each of the Geometric data and Intensity data fields contains precisely 1 count ### PCD Restrictions[​](#pcd-restrictions "Direct link to PCD Restrictions") In addition to LiDAR / point cloud data needing to include the fields described above, we make some additional restrictions for PCD files when reading in data in the folder format: * The pcd data format is either ascii or binary * Explicitly, "`binary_compressed`" is not yet supported As an example, a pcd with the header below would be supported. 
``` # .PCD v.7 - Point Cloud Data file format VERSION .7 FIELDS x y z intensity SIZE 4 4 4 4 TYPE F F F F COUNT 1 1 1 1 WIDTH 213 HEIGHT 1 VIEWPOINT 0 0 0 1 0 0 0 POINTS 213 DATA binary ``` Whereas, a pcd with the header below would not be supported ``` # .PCD v.7 - Point Cloud Data file format VERSION .7 FIELDS x y z intensity SIZE 4 4 4 4 TYPE F F F I COUNT 3 3 3 1 WIDTH 213 HEIGHT 1 VIEWPOINT 0 0 0 1 0 0 0 POINTS 213 DATA binary_compressed ``` --- # Group Management R\&DGrowthEnterprise A new group is automatically created for every account on the Tangram Vision Hub upon signup. Manage group details on the “Group” page. ## Naming Your Group[​](#naming-your-group "Direct link to Naming Your Group") You can create a name for your organization by selecting the pencil icon to the right of the default title “Your Organization Name”. You can always go back and update this name later if you decide to change it. ![Screenshot of where to rename your group](/assets/images/hub_name_group-59e3c6d067cd99274018878dbd981913.png) ## Adding Group Members & Assigning Roles[​](#adding-group-members--assigning-roles "Direct link to Adding Group Members & Assigning Roles") To invite a member to your group, go to the “Invite a New Member” section of the Group administration page. ![Screenshot of the form to invite a new group member](/assets/images/hub_send_invite-179936522f4c5b4741416c4e1222315d.png) Enter the email address of the new group member that you would like to invite. Select their role. Then press the “Send Invite Email” button to send an invitation to the new group member. *Note: Unredeemed invites expire after 7 days. You can always resend an invitation if it hasn’t been redeemed.* About Roles There are two available roles in the Tangram Vision Hub: Admin and Member. Members have basic privileges. Members can: * create or revoke personal licenses * create or revoke group licenses * see other group members Admins have more privileges than members. Admins can do everything members can do, along with the following: * invite new members * remove existing members * edit the role of existing members * edit the organization name * subscribe to services * add or edit billing methods * edit the billing email address ## Managing Group Members[​](#managing-group-members "Direct link to Managing Group Members") If you are a group admin, there are two controls available to manage group members. ### Assigning Roles[​](#assigning-roles "Direct link to Assigning Roles") If you’d like to upgrade a member to an admin, or downgrade an admin to a member, you can do that on the group administration page under the “Members” heading. Each group member has an assigned role that can be changed by selecting the dropdown under the “Role” column. Simply select that dropdown menu in the row of the user whose role you wish to change, and make the change. Once you have changed the role in the dropdown menu, that change is automatically saved. You aren’t allowed to downgrade yourself from admin to member, to prevent a group from becoming admin-less. If you want to be downgraded, ask another admin to demote you. ### Removing Group Members[​](#removing-group-members "Direct link to Removing Group Members") If you wish to remove a member from a group, you can also do that on the group administration page under the “Members” heading. Find the user who you wish to remove from the group, and then click the “Remove From Group” text under the “Actions” column. A dialogue will appear asking you to confirm this decision. 
Click “OK” to proceed with removing the user from the group, or “Cancel” to cancel the action. IMPORTANT! When you remove a user from your group, you will also remove their personal licenses to use the Tangram Vision software. If you have infrastructure or processes that depend on the user’s personal licenses, they will no longer pass license checks. Please make sure that you want to perform this action before you proceed. ## Joining a Group[​](#joining-a-group "Direct link to Joining a Group") To join a group, you must be invited by a group admin. If you have not received an invitation to join a group, please contact your organization’s group admin and request an invitation. If you have received an invitation to join a group, click the link in the email that says “View your Tangram Vision Hub Invite”. You will be taken to the Tangram Vision Hub login page, where you can create a new account using a Google account, a GitHub account, or your email. Once you are in the Hub, you will be taken to a landing page where you can accept the invitation to join a group. Click “Accept Invite” - now you’ve joined your organization’s group! ![Screenshot of page where you can accept an invitation to a group](/assets/images/hub_accept_invite-35c2ffb348d2b83fd859d5951529be77.png) If you see an error page instead of the “Accept Invite” page, please check the following: * Your invite may have expired or been revoked. Please contact your group admin for a new invite. * You already redeemed this invite. Please check the Group page in the Hub to confirm that you are in the group that you expect. * You signed into the Hub with a different email address than was invited. Please log out and follow the “View your Tangram Vision Hub Invite” link from the invite email again, signing in with the email address that was invited. Should you ever wish to leave the group, contact your admin to request that they remove you from the group. --- # Installation ## Recommended: MCAP CLI tool[​](#recommended-mcap-cli-tool "Direct link to Recommended: MCAP CLI tool") MetriCal relies on MCAP files extensively. Not only does MetriCal natively support MCAP files for data input, but it also uses MCAP as an output format for calibration results and detections. It's strongly recommended to install the MCAP CLI tool if you plan to manipulate or extract results in your own software. [Click here for the MCAP CLI installation guide.](https://mcap.dev/guides/cli) ## MetriCal Installation Methods[​](#metrical-installation-methods "Direct link to MetriCal Installation Methods") There are two ways to install MetriCal. If you are using Ubuntu or Pop!\_OS, you can install MetriCal via the `apt` package manager. If you are using a different operating system, or if your system configuration needs it, you can install MetriCal via Docker. * Apt Repository * Docker ## Install MetriCal via Apt Repository[​](#install-metrical-via-apt-repository "Direct link to Install MetriCal via Apt Repository") Tangram Vision maintains a repository for MetriCal on a private Cloudsmith instance ([cloudsmith.io](https://cloudsmith.io/)). 
Compatible OS: * Ubuntu 20.04 (Focal Fossa) * Ubuntu and Pop!\_OS 22.04 (Jammy Jellyfish) * Ubuntu and Pop!\_OS 24.04 (Noble Numbat) ### Stable Releases[​](#stable-releases "Direct link to Stable Releases") metrical\_install.sh ``` curl -1sLf \ 'https://dl.cloudsmith.io/public/tangram-vision/apt/setup.deb.sh' \ | sudo -E bash sudo apt update; sudo apt install metrical; ``` ### Release Candidates[​](#release-candidates "Direct link to Release Candidates") metrical\_install\_rc.sh ``` curl -1sLf \ 'https://dl.cloudsmith.io/public/tangram-vision/apt-rc/setup.deb.sh' \ | sudo -E bash sudo apt update; sudo apt install metrical; ``` ## Use MetriCal with Docker[​](#use-metrical-with-docker "Direct link to Use MetriCal with Docker") Every version of MetriCal is available as a Docker image. Even visualization features can be run from Docker using Rerun as a separate process. For Docker When you see a section like this in the documentation, it refers to a Docker-specific feature or method. Keep an eye out for them! ### 1. Install Docker[​](#1-install-docker "Direct link to 1. Install Docker") If you do not have Docker installed, follow the installation instructions on the Docker website. ### 2. Download MetriCal via Docker[​](#2-download-metrical-via-docker "Direct link to 2. Download MetriCal via Docker") There are two types of MetriCal releases. All releases can be found [listed on Docker Hub](https://hub.docker.com/r/tangramvision/cli/tags). #### Stable Release[​](#stable-release "Direct link to Stable Release") Stable releases are an official version bump for MetriCal. These versions are verified and tested by Tangram Vision and MetriCal customers. They are guaranteed to have a stable API and follow SemVer. Find documentation for these releases under their version number in the nav bar. Stable releases can be pulled using the following command: ``` docker pull tangramvision/cli:latest ``` Install a specific version with a command like: ``` docker pull tangramvision/cli:13.0.0 ``` #### Release Candidates[​](#release-candidates-1 "Direct link to Release Candidates") Release candidates are versions of MetriCal that include useful updates, but are relatively untested or unproven and could contain bugs. As time goes on, release candidates will either evolve into stable releases or be replaced by newer release candidates. The latest release candidate can be referenced with the `dev-latest` alias, and can be installed by running the following: ``` docker pull tangramvision/cli:dev-latest ``` Specific release candidates can be installed the same way you'd install any other versioned release (see above). *** With that, you should now have a MetriCal instance on your machine! We'll assume the Stable release (`tangramvision/cli:latest`) for the rest of the introduction. ### 3. Create a MetriCal Docker Alias[​](#3-create-a-metrical-docker-alias "Direct link to 3. Create a MetriCal Docker Alias") Throughout the documentation, you will see references to `metrical` in the code snippets. This is a named bash function describing a larger docker command. For convenience, it can be useful to include that function (outlined below) in your script or shell config file (e.g. `~/.bashrc`): \~/.bashrc ``` metrical() { docker run --rm --tty --init --user="$(id -u):$(id -g)" \ --volume="$MOUNT":"/datasets" \ --volume=metrical-license-cache:/.cache/tangram-vision \ --workdir="/datasets" \ --add-host=host.docker.internal:host-gateway \ tangramvision/cli:latest \ "$@"; } ``` Now you should be able to run `metrical` wherever! 
`--volume` and `--workdir` The `--volume` flag syntax represents a bridge between the host machine and the docker instance. If your data is contained within the directory `/home/user/datasets`, then you would replace `$MOUNT` with `/home/user/datasets`. `--workdir` indicates that we're now primarily working in the `/datasets` directory within the docker container. All subsequent MetriCal commands are run as if from that `/datasets` directory. --- # License Creation R\&DGrowthEnterprise License Deprecation With MetriCal v14 (released in May 2025), version 1 license keys (prefixed with `key/`) have been **deprecated and will no longer work after November 1st, 2025**. If you use license keys prefixed with `key/`, please create new license keys (which will be version 2 keys, prefixed with `key2/`) and use them instead. ## Why Get A License?[​](#why-get-a-license "Direct link to Why Get A License?") MetriCal is available for download and evaluation without a license requirement. However, accessing and exporting calibration results requires a valid license key. Tangram Vision Hub All of the following happens in the Tangram Vision Hub. If you don't have an account, sign up for one at [hub.tangramvision.com](https://hub.tangramvision.com). ## Personal Licenses[​](#personal-licenses "Direct link to Personal Licenses") To create a license, navigate to your “Account” page in the Hub. In the Personal Licenses section, you will be able to create a new license. First, choose a name for the license. A good idea is to associate the license name with the specific device it will be used with. Once you have added a name, click “Create” and a new license will be created. ![Screenshot of personal licenses table and form with a license and usage instructions shown](/assets/images/hub_personal_licenses-ae988d537659326cbc20427d37a51202.png) Upon creating a license, you will see the license key along with a brief explanation of the different ways that you can provide the license key to MetriCal, along with a [link to more detailed instructions](/metrical/configuration/license_usage.md). To revoke a personal license, click the “REVOKE” button in the “Actions” tab. Running Tangram Vision software with that license will thereafter return an error. If a user leaves or is removed from a group, their personal licenses are automatically revoked. ## Group Licenses[​](#group-licenses "Direct link to Group Licenses") Unlike personal licenses, group licenses are not tied to a particular user and will not be automatically revoked if the license’s creator leaves or is removed from the group. To create a license, navigate to the “Group” page in the Hub. In the Group Licenses section, you will be able to create a new license. First, choose a name for the license. A good idea is to associate the license name with the specific device or process it will be used with. Once you have added a name, click “Create” and a new license will be created. Role Required Users in both “admin” and “member” roles can create and revoke group licenses. ![Screenshot of group licenses table and form with a license and usage instructions shown](/assets/images/hub_group_licenses-59f4d3ef0886b0b356cda04ab47b3c77.png) To revoke a group license, click the “REVOKE” button in the “Actions” tab. Running Tangram Vision software with that license will thereafter return an error. 
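As a quick check that the alias works (the dataset path below is only an example), you might run:

```
export MOUNT=/home/user/datasets   # the host directory to mount at /datasets
metrical --version                 # runs MetriCal inside the container
metrical init --help               # any other MetriCal command works the same way
```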
--- # License Usage License Deprecation With MetriCal v14 (released in May 2025), version 1 license keys (prefixed with `key/`) have been **deprecated and will no longer work after February 1st, 2026**. If you use license keys prefixed with `key/`, please create new license keys (which will be version 2 keys, prefixed with `key2/`) and use them instead. MetriCal licenses are user-specific rather than machine-specific. A license key can be utilized on any system with internet connectivity that can establish a connection to Tangram Vision's authentication servers. For environments with limited connectivity, offline licensing options are available (detailed below). ## Using a License Key[​](#using-a-license-key "Direct link to Using a License Key") R\&DGrowthEnterprise MetriCal looks for license keys in 3 places, in this order: ### 1. Command Line Argument[​](#1-command-line-argument "Direct link to 1. Command Line Argument") Provide the key as an argument to the `metrical` command. metrical\_runner.sh ``` metrical --license="key2/" calibrate ... ``` For Docker This command line argument can be included directly in the `metrical` shell function by adding it before the `"$@"` line of your alias. \~/.bashrc ``` metrical() { docker run --rm --tty --init --user="$(id -u):$(id -g)" \ --volume="$MOUNT":"/datasets" \ --volume=metrical-license-cache:/.cache/tangram-vision \ --workdir="/datasets" \ --add-host=host.docker.internal:host-gateway \ tangramvision/cli:latest \ # Note the following line! --license="key2/" \ "$@"; } ``` ### 2. Environment Variable[​](#2-environment-variable "Direct link to 2. Environment Variable") Provide the key as a string in the environment variable `TANGRAM_VISION_LICENSE`. metrical\_runner.sh ``` TANGRAM_VISION_LICENSE="key2/" metrical calibrate ... ``` For Docker #### Using Environment Variables in Docker[​](#using-environment-variables-in-docker "Direct link to Using Environment Variables in Docker") If running MetriCal via Docker, the environment variable must be set *inside* the container. The [docker run documentation](https://docs.docker.com/reference/cli/docker/container/run/#env) shows various methods for setting environment variables inside a container. One example of how you can do this is to add an `--env` flag to the `docker run` invocation inside the `metrical` shell function that was shown above, which would then look like this: \~/.bashrc ``` metrical() { docker run --rm --tty --init --user="$(id -u):$(id -g)" \ --volume="$MOUNT":"/datasets" \ --volume=metrical-license-cache:/.cache/tangram-vision \ --workdir="/datasets" \ --add-host=host.docker.internal:host-gateway \ # Note the following line! --env=TANGRAM_VISION_LICENSE="key2/" \ tangramvision/cli:latest \ "$@"; } ``` ### 3. Config File[​](#3-config-file "Direct link to 3. Config File") Provide the key as a string in a config TOML file, assigned to a top-level `license` key. This key should be placed in your config directory at `~/.config/tangram-vision/config.toml`. \~/.config/tangram-vision/config.toml ``` license = "key2/{your_key}" ``` For Docker #### Using a Config File in Docker[​](#using-a-config-file-in-docker "Direct link to Using a Config File in Docker") To use a config file in Docker, you’ll need to modify the `metrical` shell function by mounting the config file to the expected location. Use the following snippet, making sure to update `path/to/config.toml` to point to your config file. 
\~/.bashrc ``` metrical() { docker run --rm --tty --init --user="$(id -u):$(id -g)" \ --volume="$MOUNT":"/datasets" \ --volume=metrical-license-cache:/.cache/tangram-vision \ # Note the following line! --volume=path/to/config.toml:/.config/tangram-vision/config.toml:ro \ --workdir="/datasets" \ --add-host=host.docker.internal:host-gateway \ tangramvision/cli:latest \ "$@"; } ``` ## Using a License Key Offline[​](#using-a-license-key-offline "Direct link to Using a License Key Offline") R\&DGrowthEnterprise MetriCal can validate a license via a local license-cache file, ensuring that internet hiccups don't cause license validation failures that interrupt critical calibration processes. In order to use MetriCal without an active internet connection, you must run any MetriCal command with an active internet connection once. (This can be as simple as running `metrical calibrate foo bar baz`, even if `foo`, `bar`, and `baz` files do not exist.) This will create a license-cache file that is valid (and enables offline usage of MetriCal) for 1 week. Every time MetriCal is run with an active connection, the license-cache file will be refreshed and valid for 1 week. If the license-cache file hasn't been refreshed in more than a week and MetriCal is run offline, it will exit with a "License-cache file is expired" error. For Docker Include an additional volume mount when running the docker container, so a license-cache file can persist between MetriCal runs. Update the `metrical` shell function to include the `--volume=metrical-license-cache:...` line shown below: \~/.bashrc ``` metrical() { docker run --rm --tty --init --user="$(id -u):$(id -g)" \ --volume="$MOUNT":"/datasets" \ # The following line enables offline licensing --volume=metrical-license-cache:/.cache/tangram-vision \ --workdir="/datasets" \ --add-host=host.docker.internal:host-gateway \ tangramvision/cli:latest \ "$@"; } ``` --- # Visualization It's so nice to see your data! MetriCal provides visualization features to help you understand exactly what's going on with your calibrations, so you're never left in the dark. * Visual inspection of detections during the calibration process * Ability to verify the spatial alignment of different sensors * Confirmation that your calibration procedure is capturing the right data * Immediate visual feedback on calibration quality By properly setting up and using the visualization features in MetriCal, you can gain greater confidence in your calibration results and more easily troubleshoot any issues that arise during the calibration process. ## Setting up Visualization with Rerun[​](#setting-up-visualization-with-rerun "Direct link to Setting up Visualization with Rerun") MetriCal uses [Rerun](https://www.rerun.io) as its visualization engine. There are two ways MetriCal interacts with Rerun: * Option 1: MetriCal can spawn a Rerun server and connect to it directly. This is the default behavior. * Option 2: MetriCal can connect to another Rerun server running elsewhere (on host, in network, etc). For Docker When using MetriCal via Docker, you'll need to run a separate Rerun server on your host machine. ### Installing Rerun[​](#installing-rerun "Direct link to Installing Rerun") Install Rerun on your host machine using either `pip` or `cargo`: ``` # Option 1: via pip pip install rerun-sdk==0.20 # Option 2: via cargo cargo install rerun-cli --version ^0.20 ``` Match Versions! Make sure to install Rerun version 0.20 to ensure compatibility with MetriCal's current version. 
Rerun is a great tool, but it's still in heavy development, and there's no guarantee of backwards compatibility. For Docker ### Run Separate Rerun Server[​](#run-separate-rerun-server "Direct link to Run Separate Rerun Server") Before using any visualization in MetriCal, start a Rerun rendering server in a separate terminal: ``` rerun --memory-limit=1GB ``` Then, ensure your `docker run` command includes the host gateway configuration: ``` --add-host=host.docker.internal:host-gateway ``` This allows the Docker container to communicate with Rerun running on your host machine. ## Visualization in Different Modes[​](#visualization-in-different-modes "Direct link to Visualization in Different Modes") ### Display Mode[​](#display-mode "Direct link to Display Mode") The Display command is designed specifically for visualization, allowing you to see the applied results of your calibration. ``` metrical display [OPTIONS] $INPUT_DATA_PATH -p $PLEX_OR_RESULTS_PATH ``` The Display command visualizes the calibration results applied to your dataset in real-time, providing a quick "ocular validation" of your calibration quality. Read more about the [Display command here](/metrical/commands/cli_utilities/display.md). ### Calibrate Mode[​](#calibrate-mode "Direct link to Calibrate Mode") In Calibrate mode, you can visualize the detection process and calibration data by adding the `--render` flag: ``` metrical calibrate \ --render \ --output-json results.json \ $DATA $PLEX $OBJSPC ``` This allows you to see detections in real-time as the calibration process runs. Learn more about the [Calibrate command here](/metrical/commands/calibration/calibrate.md). ## Advanced Visualization Options[​](#advanced-visualization-options "Direct link to Advanced Visualization Options") ### Custom Render Socket[​](#custom-render-socket "Direct link to Custom Render Socket") If you're running Rerun on a non-default port or IP address, use the `--render-socket` option: ``` metrical display --render-socket="127.0.0.1:3030" $DATA -p $RESULTS ``` By default: * Docker setup: `host.docker.internal:9876` * Local setup: `127.0.0.1:9876` When running Rerun from its CLI, the IP would correspond to its `--bind` option and the port would correspond to its `--port` option. ## Troubleshooting Visualization[​](#troubleshooting-visualization "Direct link to Troubleshooting Visualization") If you're having trouble with visualization: 1. Make sure Rerun is running and listening on the correct port 2. Verify that you've included the `--add-host=host.docker.internal:host-gateway` flag when running MetriCal via Docker 3. Check that you're using a compatible version of Rerun (v0.20) 4. Try specifying the render socket explicitly with `--render-socket` --- # Components A ***component*** is an atomic sensing unit (for instance, a camera) that can output zero or more streams of observations. The observations that it produces inform its type. ## Common Features[​](#common-features "Direct link to Common Features") Every component contains some common fields. These are primarily used for identification within the Plex. | Field | Type | Description | | ----- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------- | | UUID | `Uuid` | A universally unique identifier for the component. | | Name | `String` | A name to reference the component. These must be unique, and MetriCal will treat them as the "topic" name to match within the provided dataset. 
| ## Component Kinds[​](#component-kinds "Direct link to Component Kinds") ### Camera[​](#camera "Direct link to Camera") Cameras are a fundamental visual component. In addition to the common fields that every component contains, they are defined by the following types: | Field | Type | Description | | ----------- | -------------------------- | ------------------------------------------------------------------------------------ | | Intrinsics | A camera intrinsics object | Intrinsic parameters that describe the camera model. | | Covariance | Matrix of floats | An n×n covariance matrix describing the variance-covariance of intrinsic parameters. | | Pixel pitch | float | The metric size of a pixel in real space. If unknown, should be equal to 1.0. | Pixel pitch units It is common practice to leave most observations and arithmetic in units of pixels when dealing with image data. However, this practice can get confusing when trying to compare two different camera types, as "1 pixel" may not equate to the same metric error between cameras. Pixel pitch allows us to compare cameras using a common unit, i.e. units-per-pixel. Note that we leave the unit ambiguous here, as this field is primarily for making analysis easier on human eyes. Common units include microns-per-pixel (μm / pixel) and meters-per-pixel (m / pixel). #### Modeling[​](#modeling "Direct link to Modeling") MetriCal provides the following intrinsics models for cameras: | Model Name | Description | | ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | | `"no_distortion"` | [No distortion model applied](/metrical/calibration_models/cameras.md#no-distortion), i.e. an ideal pinhole model | | `"opencv-radtan"` | [OpenCV RadTan](/metrical/calibration_models/cameras.md#opencv-radtan) | | `"opencv-fisheye"` | [OpenCV Fisheye](/metrical/calibration_models/cameras.md#opencv-fisheye) | | `"opencv-rational"` | [OpenCV Rational](/metrical/calibration_models/cameras.md#opencv-rational), an extension of OpenCV RadTan with radial terms in the denominator | | `"pinhole_with_brown_conrady"` | [Inverse Brown-Conrady](/metrical/calibration_models/cameras.md#pinhole-with-inverse-brown-conrady), with correction in image space | | `"pinhole_with_kannala_brandt"` | [Inverse Kannala-Brandt](/metrical/calibration_models/cameras.md#pinhole-with-inverse-kannala-brandt), with correction in image space | | `"eucm"` | [Enhanced Unified Camera Model](/metrical/calibration_models/cameras.md#eucm), i.e. EUCM | | `"double-sphere"` | [Double Sphere](/metrical/calibration_models/cameras.md#double-sphere) | | `"omni"` | [Omnidirectional](/metrical/calibration_models/cameras.md#omnidirectional-omni) | | `"power-law"` | [Power Law](/metrical/calibration_models/cameras.md#power-law) | One might choose each of these models based on their application, or dependent upon what software they wish to be compatible with. In many cases, the choice of model may not matter as much as the data capture process. If you're having trouble deciding which model will fit your system best, [contact us](mailto:support@tangramvision.com) and we can help you understand the differences that will matter to you! *** ### LiDAR[​](#lidar "Direct link to LiDAR") MetriCal's categorization of LiDAR comprises a variety of similar yet slightly-different sensors. Some examples could be: * A Velodyne VLP-16 scanner * An Ouster OS2 scanner * A Livox HAP scanner * etc. 
All of the above LiDAR can be represented as LiDAR components within MetriCal. #### Modeling[​](#modeling-1 "Direct link to Modeling") There is no intrinsics model for LiDAR in MetriCal. Hence, the only model available is: | Model Name | Description | | ---------- | ---------------------------------------------------- | | `"lidar"` | Extrinsics calibration of lidar to other modalities. | *** ### IMU[​](#imu "Direct link to IMU") Inertial Measurement Units (IMU) measure specific-force (akin to acceleration) and rotational velocity. This is done through a combination of accelerometer and gyroscopic sensors. IMUs in MetriCal are a combined form of component as MEMS IMUs are rarely able to atomically and physically separate the accelerometer and gyroscope measurements. Like other component types, aside from the common fields that every component contains, IMUs are defined by the following types: | Field | Type | Description | | -------------------- | ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------- | | Bias | An IMU bias | [IMU bias parameters](/metrical/calibration_models/imu.md#imu-bias) | | Intrinsics | An IMU intrinsics object | [IMU intrinsics model](/metrical/calibration_models/imu.md#imu-model-descriptions) | | Noise Parameters | An IMU noise parameters object | [Noise parameters](/metrical/calibration_models/imu.md#noise-parameters) | | Bias Covariance | A matrix of floats | A 6×6 covariance matrix describing the variance-covariance of the IMU bias parameters. | | Intrinsic Covariance | A matrix of floats | A n×n covariance matrix describing the variance-covariance of the IMU intrinsic parameters, with the value of n dependent on the intrinsics model used. | #### Modeling[​](#modeling-2 "Direct link to Modeling") The following models are provided as part of MetriCal: | Model Name | Description | | -------------------------------------- | ------------------------------------------------------------------------------------------------------------ | | `"scale"` | [Scale model](/metrical/calibration_models/imu.md#imu-model-descriptions) | | `"scale_shear"` | [Scale and shear model](/metrical/calibration_models/imu.md#imu-model-descriptions) | | `"scale_shear_rotation"` | [Scale, shear and rotation model](/metrical/calibration_models/imu.md#imu-model-descriptions) | | `"scale_shear_rotation_g_sensitivity"` | [Scale, shear, rotation and g-sensitivity model](/metrical/calibration_models/imu.md#imu-model-descriptions) | *** ### Local Navigation Systems (LNS)[​](#local-navigation-systems-lns "Direct link to Local Navigation Systems (LNS)") Local Navigation Systems are represented by odometry messages, the origin of which is located relative to the robot or system recording. This is different from Global Navigation Systems, which are based on a "global" coordinate frame away from the robot. As such, Local Navigation System components are purely extrinsic in nature. #### Modeling[​](#modeling-3 "Direct link to Modeling") | Model Name | Description | | ---------- | -------------------------------------------------- | | `"lns"` | Extrinsics calibration of LNS to other modalities. | --- # Constraints A ***constraint*** is a spatial relation, temporal relation, or formation between any two components. 
In the context of interpreting plexes as a multigraph of relationships over our system (comprised of different components), constraints are the edge-relationships of the graph. ![Constraint Types](/assets/images/constraint_types-211a14722bcde540b0873dd248486834.png) ## Conventions[](#conventions "Direct link to Conventions") ### "To" and "From"[](#to-and-from "Direct link to \"To\" and \"From\"") Constraints are always across two components. MetriCal will often refer to each of these components in a directional sense using the "from" and "to" specifiers, which reference the UUIDs of the components. This directional specifier is useful because it tells us how the extrinsics of a spatial constraint, or the synchronization data of a temporal constraint, can be applied to put two observations into the same frame of reference. Our extrinsics type is essentially a way to describe how to transform points in one coordinate system into another. Anyone who has ever worked with transforms has experienced confusion in convention. In order to cut through the ambiguity of extrinsics, every spatial constraint has a `from` and `to` field. Let's dive into how this works. We can think of an extrinsics transform between components A and B using the following notation: $\Gamma^{A}_{B} := \Gamma^{\text{to } A}_{\text{from } B}$ If we wanted to move a point $p$ from the frame of reference of component B to that of A, we would use the following math: $p^{A}_{W} = \Gamma^{A}_{B} \cdot p^{B}_{W}$ ...also read as "$p^{A}$ equals $p^{B}$ by transforming to A from B". Thus, when constructing a spatial constraint, **the reference frame for the extrinsics transform is in the coordinate frame of component A**, and would move points from the coordinate frame of component B. Similar examples can be made for converting timestamps from e.g. B into the same "clock" as A using temporal constraints. ### Coordinate Bases in MetriCal[](#coordinate-bases-in-metrical "Direct link to Coordinate Bases in MetriCal") It's common to represent a transform in a certain convention, such as FLU (X-Forward, Y-Left, Z-Up) or RDF (X-Right, Y-Down, Z-Forward). One might then wonder what the default coordinate system is for MetriCal. Short answer: it entirely depends on the data you're working with. In MetriCal, spatial constraints are designed to transform *observations* (not components!) from the `from` frame to the `to` frame. In the case of a camera-LiDAR extrinsics transform $\Gamma^{C}_{L}$, the solved extrinsics will move LiDAR observations $p^{L}$ (which may be in FLU) to a camera observation's coordinate frame (which may be in RDF): $p^{C} = \Gamma^{C}_{L} \cdot p^{L}$ This makes it simple to move observations from one component to another for sensor fusion tasks. This is what is meant when MetriCal is said to have no "default" component coordinate system; it operates directly on the provided observations!
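To make the convention concrete, here is a minimal sketch that represents a camera-from-LiDAR extrinsics transform as a 4×4 homogeneous matrix. The matrix representation, rotation, translation, and point values are purely illustrative assumptions; MetriCal serializes extrinsics in its own plex format.

```python
import numpy as np

# Illustrative only: model the extrinsics ("to camera, from LiDAR") as a 4x4
# homogeneous transform. The rotation and translation here are made-up numbers.
def make_gamma(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    g = np.eye(4)
    g[:3, :3] = rotation
    g[:3, 3] = translation
    return g

# A 90° yaw and a 10 cm lateral offset between the LiDAR and camera frames.
yaw = np.deg2rad(90.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
gamma_cam_from_lidar = make_gamma(R, np.array([0.1, 0.0, 0.0]))

# A LiDAR observation in homogeneous coordinates, moved into the camera frame.
p_lidar = np.array([2.0, 0.5, -0.2, 1.0])
p_camera = gamma_cam_from_lidar @ p_lidar
print(p_camera[:3])
```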
## Spatial Constraints[](#spatial-constraints "Direct link to Spatial Constraints") It is common to ask for the spatial relationship or extrinsics between two given components. A Plex incorporates this information in the form of what is called *spatial constraints*. A spatial constraint can be broken down into: | Field | Type | Description | | ---------- | -------------------- | ----------- | | Extrinsics | An extrinsics object | The extrinsics describing the "*To*" from "*From*" transformation. | | Covariance | A matrix of floats | The 6×6 covariance of the extrinsics described by this constraint. | | From | UUID | The UUID of the component that describes the "*From*" or base coordinate frame. | | To | UUID | The UUID of the component that describes the "*To*" coordinate frame, which we are transforming into. This can be considered the "origin" of the extrinsics matrix. | For a single-camera system, a plex is a simple affair. For a complicated multi-component system, plexes can become incredibly complex and difficult to parse. Unlike other calibration systems, MetriCal creates fully connected graphs whenever it can. Everything is related! ![A perfectly reasonable plex](/assets/images/shape_plex-695860bea6def4a922127eff146cdd04.png) ### Spatial Covariance[](#spatial-covariance "Direct link to Spatial Covariance") Spatial covariance is generally presented as a 6×6 matrix relating the variance-covariance of an $\mathfrak{se}(3)$ Lie algebra element: $[v_1 \; v_2 \; v_3 \; \omega_1 \; \omega_2 \; \omega_3]$ When traversing for spatial constraints within the Plex, the constraint returned will always contain the extrinsic with the *minimum overall covariance*. This ensures that users will always get the extrinsic that has the smallest covariance (thus, the highest confidence / precision), even if multiple spatial constraints exist between any two components. ![Covariances of the plex's spatial constraints](/assets/images/shape_covariance-ee2d1c8fad41e41cd8fdf8d282c7a0f7.png) Spatial Constraint Traversal - Python Example Since all spatial constraints in a plex form an undirected (maybe even fully connected) graph, it can be confusing to figure out how to traverse that graph to find the "best" extrinsics between two components. MetriCal itself provides the [Shape](/metrical/commands/calibration/shape/shape_overview.md) command to help with this (with a few helpful options in the [Report](/metrical/commands/cli_utilities/report.md) command), but sometimes it's useful to do your own thing. To help would-be derivers, the [spatial\_constraint\_traversal](https://gitlab.com/tangram-vision/oss/spatial_constraint_traversal) repository demonstrates the right way to derive optimal extrinsics straight from the Plex JSON via the magic of Python.
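For a feel of what that traversal involves before reaching for the repository, the sketch below builds a graph of constraints and composes extrinsics along the lowest-covariance path. The dictionary keys, the use of the covariance trace as an edge weight, and the 4×4 matrix representation are all illustrative assumptions rather than the plex schema; the spatial_constraint_traversal repository remains the reference implementation.

```python
import networkx as nx
import numpy as np

def best_extrinsics(constraints, start_uuid, end_uuid):
    """Compose extrinsics from start_uuid to end_uuid along the lowest-covariance path.

    Each constraint is assumed to be a dict with illustrative keys "from", "to",
    "gamma" (4x4 ndarray, to-frame-from-frame), and "covariance" (6x6 ndarray).
    """
    graph = nx.Graph()
    for c in constraints:
        # Summarize each 6x6 covariance as its trace so it can serve as an edge
        # weight. A real traversal might weight edges differently, and would use
        # a MultiGraph if several constraints relate the same two components.
        graph.add_edge(c["from"], c["to"],
                       weight=float(np.trace(c["covariance"])), constraint=c)

    path = nx.shortest_path(graph, source=start_uuid, target=end_uuid, weight="weight")

    gamma = np.eye(4)
    for frm, to in zip(path, path[1:]):
        c = graph[frm][to]["constraint"]
        # Invert the transform when walking a constraint against its stored direction.
        step = c["gamma"] if (c["from"], c["to"]) == (frm, to) else np.linalg.inv(c["gamma"])
        gamma = step @ gamma
    return gamma  # moves points from start_uuid's frame into end_uuid's frame
```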
## Temporal Constraints[](#temporal-constraints "Direct link to Temporal Constraints") Time is a tricky thing in perception, but of crucial importance to get right. We've developed our temporal constraint to be flexible enough to describe many of the most common timing scenarios between components. | Field | Type | Description | | --------------- | ------------------------ | ----------- | | Synchronization | A synchronization object | The strategy to achieve known synchronization between these two components in the Plex. | | Resolution | float | The resolution to which synchronization should be applied. | | From | UUID | The UUID of the component that the synchronization strategy must be applied to. | | To | UUID | The UUID of the component whose clock we synchronize into by applying our synchronization strategy (to the `from` component). | ### The Problem With Clocks[](#the-problem-with-clocks "Direct link to The Problem With Clocks") In the world of hardware, measuring time can be a challenge. Two clocks might differ in several different ways; without taking these nuances into account, many higher-level perception tasks can fail. Let's take the example below: two different clocks, possibly from two different hosts, that might be informing separate components in our plex. ![Two clocks out of sync](/assets/images/time_const_intro-49859706a6dd80d4f31c5b7cbd6b7f7f.png) Temporal constraints can balance these different clocks across a plex in order to make sure time confusion never occurs. They achieve this through ***Synchronization***. ### Synchronization[](#synchronization "Direct link to Synchronization") Synchronization describes the following relationship between two clocks: $C_{\text{to}} = (1\mathrm{e}9 + \text{skew}) \cdot C_{\text{from}} + \text{offset}$ | Field | Type | Description | | ------ | ------- | ----------- | | offset | Integer | The epoch offset between two clocks, in units of integer nanoseconds. | | skew | Integer | The scale offset between two clocks. Unitless. | #### Offset[](#offset "Direct link to Offset") Unless two components are using the same clock, there's a chance that they are offset in time. This means that time *t* in one clock does not align with time *t* in the other. Fixing this is rather simple: just shift the time values in the `from` clock by the `offset` parameter until the two *t* times match. ![Applying offset to a clock](/assets/images/time_const_offset-9714fb97fbee94e31b853c1448da7f60.png) #### Skew[](#skew "Direct link to Skew") Skew compensates for the difference in the duration of a time increment between two clocks. In other words, a second in one clock might be a different length than a second in another! These differences can be very subtle, but they will result in some unwanted drift. Applying `skew` to a `from` clock's timestamps will match the duration of a second to that of the `to` clock. ![Applying skew to a clock](/assets/images/time_const_skew-323f3c9e0b0dbf4682327f80fd36083a.png) Between `skew` and `offset`, we have the tools we need to synchronize time between two clocks! Note that components that use the same host clock need no synchronization; their `skew` and `offset` remain `0`. Further Reading MetriCal has adopted the terminology from [this paper](//www.iol.unh.edu/sites/default/files/knowledgebase/1588/clock_synchronization_terminology.pdf) from the University of New Hampshire's InterOperability Laboratory. ### Resolution[](#resolution "Direct link to Resolution") ***Resolution*** helps MetriCal identify observations that are meant to be synchronized between two components. Say we have two camera components. The first is producing one image every 5 seconds; the second produces a new image every 1.3 seconds. We want to pair up observations from the two separate streams that we know are uniquely synced in time as a one-to-one set. Our resolution tells the Platform how far from an observation we want to search for a synced pair. In the case of our first camera, we know that one new frame comes every 5 seconds. This means that there's a span of 2.5 seconds on either side of this image that could hold a matching observation from our second camera. So, we set `resolution` to `2.5 * 1e9` (for nanoseconds). The Platform will then look for any synced observation candidates in camera two and find the observation that matches most closely in time to the image in camera one.
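For the programmatically inclined, here is a rough, brute-force sketch of that matching idea. The stream rates mirror the example above, and the nearest-neighbor search is deliberately naive; a real implementation would also enforce one-to-one pairing.

```python
import numpy as np

def pair_observations(times_to_ns, times_from_ns, offset_ns, resolution_ns):
    """Pair each observation in the `to` stream with its closest `from` neighbor.

    Illustration only: shift the `from` stream by its offset, then accept the
    closest candidate that falls within `resolution_ns`.
    """
    adjusted = np.asarray(times_from_ns, dtype=np.float64) + offset_ns
    pairs = []
    for t in times_to_ns:
        idx = int(np.argmin(np.abs(adjusted - t)))
        if abs(adjusted[idx] - t) <= resolution_ns:
            pairs.append((t, times_from_ns[idx]))
    return pairs

# Camera one produces a frame every 5 s, camera two every 1.3 s (timestamps in ns).
cam_one = np.arange(0, 30, 5.0) * 1e9
cam_two = np.arange(0, 30, 1.3) * 1e9
print(pair_observations(cam_one, cam_two, offset_ns=0, resolution_ns=2.5e9))
```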
All that being said, resolution is a concept better shown than told: ![Applying resolution to an observation series](/assets/images/time_const_resolution-b8cbb2b742bbdb842bb95dd6981318b4.png) If you're confident that two observation streams are in sync, you may set the resolution fairly small. However, given how inconsistently some components can behave, setting a tight resolution is generally neither necessary nor recommended. ## Formations[](#formations "Direct link to Formations") Formations are a bit different from spatial and temporal constraints. While spatial and temporal constraints exist to model the physical realities of components in a system, formations exist to model relationships in the system that don't fall within those boundaries. Formations are defined by the following fields: | Field | Type | Description | | ---------- | --------------------------- | ----------- | | Components | An array of component UUIDs | A set of unique UUIDs (corresponding to components that exist within the plex) that are grouped under this formation. | | Name | String | The name for the formation. Usually indicates the purpose or function of the group. | | UUID | UUID | The unique identifier for the formation. | Formations can label subplexes within a plex, label individual OEM pieces of hardware (e.g. labeling a single Intel RealSense device), or group together components that share some common function (e.g. grouping a stereo pair of cameras together). Some formations can be used by MetriCal to generate unique kinds of metrics after calibration. These are identified purely through the formation's `name` field. For example, MetriCal currently understands the following formation kinds: * `"stereo_pair"`: This denotes that two cameras are part of a stereo pair. This also tells MetriCal to compute stereo rectification metrics after a calibration is complete. --- # Covariance Both components and constraints include ***covariance***, a measure of uncertainty in a Plex. The inclusion of covariance as a core idea is one of the biggest differentiators of MetriCal from competing calibration software. ## The Role Of Covariance[](#the-role-of-covariance "Direct link to The Role Of Covariance") Many calibration pipelines will have some notion of whether or not some parameter is "fixed" or "variable." Fixed parameters are treated as being *perfectly observed*: * Their values are known * They have no error * They are not optimized during a calibration Variable parameters are the opposite, being treated as *perfectly unobserved*: * Their values are not known at all * They may have any amount of error * They are always optimized during a calibration This creates a dichotomy where we either have zero (0%) information about some quantity, or we have perfect information (100%) about some quantity. This is simply not true; most of the time, there is a reasonable and quantifiable amount of uncertainty in any given system. As you might have guessed, ***covariance*** provides a method of modeling this uncertainty. ## Describing Covariance[](#describing-covariance "Direct link to Describing Covariance") While we may not know the *exact* values of every parameter in our calibration, we can typically make an educated guess. It is common practice to state what we know about a parameter like this: **⟨parameter⟩ is 1000.0 ⟨units⟩, ± 0.010 ⟨units⟩**. This ± 0.010 tolerance gives us a way to initialize a parameter's ***variance-covariance*** (shortened to just *covariance*).
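As a minimal illustration (the parameter names and numbers below are made up, and this is not MetriCal's API), a stated tolerance becomes a variance by squaring it, and a set of tolerances becomes a diagonal covariance prior:

```python
import numpy as np

# Illustrative only: turn stated tolerances (standard deviations) into a
# diagonal covariance prior. A ±0.010 tolerance becomes a variance of 0.010².
tolerances = {"focal_length_px": 0.010, "cx_px": 0.5, "cy_px": 0.5}
covariance = np.diag([sigma**2 for sigma in tolerances.values()])
print(covariance)
```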
Many of MetriCal's processes take this information into account as a statistical prior. Rather than "fixing" any of our quantities, we update these values through the optimization process. This guarantees that we will never converge to values with variance / precision that is worse than what is specified by our priors. warning When using covariance, know that this value is the standard deviation *squared*. In our example above, if our standard deviation is ± 0.010 units, then our covariance is (±0.010)² units². MetriCal incorporates the concept of covariance for all observable quantities in our calibration process. This does add some complexity to the system, but provides the benefit of statistical rigor in our calibration pipeline. --- # Object Space An ***Object Space*** refers to a known set of features in the environment that is observed in our calibration data. This often takes the form of a target marker, a calibration target, or similar mechanism. This is one of the main user inputs to MetriCal, along with the system's [Plex](/metrical/core_concepts/plex_overview.md) and [calibration dataset](/metrical/configuration/data_formats.md). Constructing Your Object Space There are premade object spaces available for common calibration targets in the targets documentation, both for [cameras](/metrical/targets/camera_targets.md) and [lidar](/metrical/targets/lidar_targets.md). If those aren't enough, check out our [premade target](https://gitlab.com/tangram-vision/platform/metrical_premade_targets) library and the target selection wizard. These will automatically account for any gotchas and provide you with both valid targets and a well-formed object space file. One of the most difficult problems in calibration is that of cross-modality data correlation. A camera is fundamentally 2D, while LiDAR is 3D. How do we bridge the gap? By using the right object space! Many seasoned calibrators are familiar with the checkerboards and grids that are used for camera calibration; these are object spaces as well. Object spaces as used by MetriCal help define the parameters needed to accurately and precisely detect and manipulate features seen across modalities in the environment. This section serves as a reference for the different kinds of object spaces, detectors, and features supported by MetriCal; and perhaps more importantly, how to combine these object spaces. ## Diving Into The Unknown(s)[](#diving-into-the-unknowns "Direct link to Diving Into The Unknown(s)") Calibration processes often use external sources of knowledge to learn values that a component (a camera, for instance) couldn't derive on its own. Continuing the example with cameras as our component of reference: there's no sense of scale in a photograph. A photo of a mountain could be larger than life, or it could be a diorama; the camera has no way of knowing, and the image unto itself has no way to communicate the metric scale of what it contains. This is where object spaces come into play. If we place a target with known metric properties in the image, we now have a reference for metric space. ![Target in object space](/assets/images/object_space-16b273aad98aee5c8e4675512771cea6.png) Most component types require some target field like this for proper calibration. For LiDAR, we use a [circular target](/assets/files/lidar_circle_target-a6b529718233d9272cc14e53ea0b886a.pdf) comprising a checkerboard and some retroreflective tape; similarly, for cameras, MetriCal supports a whole host of different checkerboards and signalized markers.
Each of these targets is referred to as an object space. ## Object Spaces are Optimized[​](#object-spaces-are-optimized "Direct link to Object Spaces are Optimized") In MetriCal, even object space points have covariance! This reflects the imperfection of real life; even the sturdiest target can warp and bend, which will create uncertainty. We embed this possibility in the covariance value of each object space point. That way, you can have greater certainty of the results, even if your target is not the perfectly "flat" or idealized geometric abstraction that is often assumed in calibration software. One of the most unique capabilities of MetriCal is the ability to optimize object space. With MetriCal, it is possible to calibrate with boards that are imperfect without inducing projective compensation errors back into your [final results](/metrical/results/report.md). ### Multi-Target Calibrations? No Problem.[​](#multi-target-calibrations-no-problem "Direct link to Multi-Target Calibrations? No Problem.") In addition to MetriCal's ability to optimize the object space, MetriCal can also optimize across multiple object spaces. By specifying multiple targets in a scene, it becomes possible to calibrate complex scenarios that wouldn't otherwise be feasible. For example, MetriCal can optimize an extrinsic between two cameras with zero overlap if multiple object spaces are used. We can even go a step further and [consolidate](/metrical/commands/calibration/consolidate.md) object space points that are observed by multiple sensors into one large feature, improving the overall accuracy of the calibration. Check out our guide on [narrow field-of-view camera calibration](/metrical/calibration_guides/narrow_fov_cal.md) to see how object space optimization and consolidation can help improve calibration results. ## Object Space Schema[​](#object-space-schema "Direct link to Object Space Schema") [⬇️Download Object Space JSON Schema](/_schemas/object_space_schema.json) Like our [plex](/metrical/core_concepts/plex_overview.md) structure, object spaces are serialized as JSON objects or files and are passed into MetriCal for many of its different modes. Loading .... --- # Plex MetriCal uses what is called a **Plex** as a description of the spatial relations, temporal relations, and formations within your perception system. In short, a Plex is a representation of the physical system that you are calibrating. It is a graph of [***components***](/metrical/core_concepts/components.md) and [***constraints***](/metrical/core_concepts/constraints.md) that fully describe the perception system being optimized by the Tangram Vision Platform. ![Simple Plex](/assets/images/simple_plex-88efc8a03c3dc4516ad2a47a4ba625bd.png) Systems and Plexes have a *one-to-one relationship*. This means that a Plex can only describe one perception system, and a system should be fully described by one Plex. For example, the above plex description could be two cameras: ![A small system](/assets/images/build_plex_system-911b22626b10a87012bd60f45d9265ab.png) ## Why Plex?[​](#why-plex "Direct link to Why Plex?") A common occurrence in the perception industry is that the words *sensor* and *device* are often ambiguous and used interchangeably. This can often lead to confusion, which is the last thing one wants in a complicated perception system. To that end, we don't use these terms at all! "Plex" is sufficient. 
Here's a quick example: While products like an Intel RealSense could be referred to as a single "device" or "sensor", they're often a combination of several constituent components, such as: * Two infrared cameras * A color camera * An accelerometer and gyroscope (6-DoF IMU) * A "depth" stream, synthesized from observations of the two infrared cameras Each of these components is represented independently in the Plex, with relations denoted separately as well. ![A complex sensor product in Plex form](/assets/images/hifi_plex-c897e4c56d0aafac4592588bf27137ba.png) We can even add a second module to the Plex with very little effort. Just create the components and constraints for the second module! ![Two complex sensor products connected into a single Plex](/assets/images/two_hifi_plex-e9cef737a648f9494bf9d2a52abcd05f.png) This formulation helps prevent ambiguity within MetriCal. For example, one might ask for "the extrinsics between two products". This is not a useful question, since there are many ways to define what that extrinsic may be, depending on the assumed convention. Instead, with a plex we can ask for the "extrinsics between color camera with UUID X and color camera with UUID Y", which is specific and unambiguous. ![Traversing a single large Plex](/assets/images/plex_traversal-38cafbf0921d4eab12dce033c6de2f86.png) ## How is the Plex used?[](#how-is-the-plex-used "Direct link to How is the Plex used?") Plexes are used by MetriCal as a representation of your system. For this reason, plexes are both input into MetriCal during calibration and generated by MetriCal as a calibration result. By using the plex, we can gauge the following prior to a calibration: * Initial intrinsic values, as well as their prior (co)variances * Initial extrinsic values, as well as their prior (co)variances * The temporal relationships between components in the system, which informs how MetriCal estimates "synchronized" observations (used to infer spatial relationships) * Formations between components, e.g. a pair of cameras comprising a stereo pair This makes the plex very flexible, in that it can be used to configure and describe parts of the calibration problem as a function of the spatial relations, temporal relations, and formations present within your system. It is a declarative means of describing how components are related. Likewise, MetriCal will also use the exact same plex format as part of the output to calibration (e.g. when you call `metrical calibrate`). From the output plex, we can gauge the following: * Final calibrated intrinsics values, as well as their posterior covariances * Final calibrated extrinsics values, as well as their posterior covariances The plex is a convenient means to serialize, copy, share, and parse known information about a system. Many of MetriCal's features directly produce or consume a plex. ## Subplexes[](#subplexes "Direct link to Subplexes") MetriCal always interprets a Plex at face value; we never assume to know your system better than you do. However, this can lead to some awkward situations when trying to profile your system. Here's the same sensor product that we've been looking at with only a few constraints added: ![Complex sensor product as subplex](/assets/images/subplexes-db28f5f0510146f3222a4339feef3aa0.png) Instead of one fully connected Plex, we have one Plex made of many *subplexes*. This can be interpreted by MetriCal a few ways: 1. The missing constraints between e.g. color and infrared cameras cannot be known. 2.
The missing constraints between e.g. color and infrared cameras are unimportant and should never be calibrated. 3. The missing constraints between e.g. color and infrared cameras were not added for some other reason (e.g. they were forgotten, sync was not working, etc.). In any case, this will inform MetriCal's behaviour. Calibration, for instance, will look to fill in all possible constraints between components. As a consequence, if two components do not have any possible spatial constraint path between them, then MetriCal will not attempt to infer a relative extrinsic or spatial constraint between them in the output. ## Plex Conversion[](#plex-conversion "Direct link to Plex Conversion") While plexes are a convenient format when working with MetriCal, other software may not support them. In these instances, it can be useful to convert part or the whole of the plex into a more usable format. Some examples include: * Converting the intrinsics and distortion parameters for a given camera into a look-up table for fast image rectification. * Converting the intrinsics and distortion parameters for a given stereo pair into a look-up table for fast stereo rectification. * Converting a plex into a [URDF](https://docs.ros.org/en/humble/Tutorials/Intermediate/URDF/URDF-Main.html) for use with ROS / ROS2 tooling. See the [`metrical shape`](/metrical/commands/calibration/shape/shape_overview.md) docs for more information on how to extract or convert information from the plex. ## Plex Schema[](#plex-schema "Direct link to Plex Schema") Don't write a plex by hand! Plex files are complex and can be difficult to write by hand. Instead, use the [Init command](/metrical/commands/calibration/init.md)! This command will generate a plex for you based on your system data, and you can modify it as needed afterwards. [⬇️Download Plex JSON Schema](/_schemas/plex_schema.json) Plex data is serialized as JSON for the convenience of being able to store and edit the plex in a text-based format. This can bring about some complications: while JSON is a plaintext format that is well-suited to manipulation (and to some extent, reading), it is loosely defined, and the internal representation of what is inside a JSON file or object is not always easy to understand. --- # Projective Compensation Calibration can be described as: > "*A statistical optimization process that aims to solve component model, constraints, and object space collectively.*" This means that we have many parameters to solve for, many different observations, and an expectation of statistical soundness. In an ideal world, all of our observations would be *independent* and *non-correlated*, i.e. one parameter's value wouldn't affect other parameters. However, this is rarely the case. Many of our components' parameters will be correlated with both observations and other parameters. This also means that *errors* in correlated parameters affect one another. When errors in the determination of one parameter cause errors in the determination of a different parameter, we call that **projective compensation**. Errors in the first parameter are being *compensated* for by *projecting* the error into the second. Projective compensation can happen as a result of: 1. Poor choice in model. If the model chosen for your components conflates many parameters together, then those parameters cannot be optimized in a statistically independent manner. 2. Poor data capture.
Because the calibration process reconstructs the "object space" from component observations, the data collection process influences how the calibration parameters are determined through the optimization process. It's hard to directly measure projective compensation in an optimization with the statistical tools we have today. It is possible to use output parameter covariance to indirectly observe when projective compensation happens. Furthermore, depending on what parameters are correlated, we can often inform the modeling or calibration process to reduce this effect. Despite it being possible to detect when projective compensation is occurring (and it is almost always occurring to some degree or another), it is more useful to understand the different results and [metrics](/metrical/results/output_file.md) that MetriCal outputs and how those can be used to signal what kinds of projective compensation may be present. --- # Welcome to MetriCal ![MetriCal banner](/assets/images/banner_metrical-fec4cd5861b6e349e14755aa6bd18254.png) ## Introduction[​](#introduction "Direct link to Introduction") MetriCal delivers accurate, precise, and expedient calibration results for multimodal sensor suites. Its easy-to-use interface and detailed metrics enable enterprise-level autonomy at scale. ## What is Sensor Calibration?[​](#what-is-sensor-calibration "Direct link to What is Sensor Calibration?") Sensor calibration is the process of determining the parameters that define how a sensor captures data from the world. This includes: * Intrinsic parameters: Properties internal to the sensor itself (like a camera's focal length or distortion) * Extrinsic parameters: The position and orientation of sensors relative to each other or a reference frame Good calibration ensures that: * Measurements from a single sensor are accurate * Data from multiple sensors can be properly aligned and fused * Perception algorithms receive consistent, accurate input data ## Why MetriCal?[​](#why-metrical "Direct link to Why MetriCal?") MetriCal provides a comprehensive solution for calibrating various sensor configurations. Unlike many calibration tools that focus on single sensors or specific combinations, MetriCal offers: * Support for single cameras, multi-camera arrays, LiDAR sensors, IMU sensors, and local navigation systems * Joint optimization of intrinsic and extrinsic parameters * Detailed uncertainty metrics to assess calibration quality * Flexible workflows for different calibration scenarios * Visualization tools to inspect calibration results * Process MCAP files and folder datasets * Convert a calibration file into a URDF file for easy integration into ROS * Use a variety of calibration targets * Create pixel-wise lookup tables for both single camera correction and stereo pair rectification ...and so much more! 
## Getting Started[​](#getting-started "Direct link to Getting Started") [![MetriCal Configuration](/_assets/metrical_config.jpg)](/metrical/configuration/installation.md) ### [MetriCal Configuration](/metrical/configuration/installation.md) [Installation, licensing, visualization, and data inputs](/metrical/configuration/installation.md) [View Guide →](/metrical/configuration/installation.md) [![Targets](/_assets/combining_boards.png)](/metrical/targets/target_overview.md) ### [Targets](/metrical/targets/target_overview.md) [Compatible targets, usage, and construction](/metrical/targets/target_overview.md) [View Guide →](/metrical/targets/target_overview.md) [![Intrinsics Models](/_assets/intrinsics_models.jpg)](/metrical/calibration_models/cameras.md) ### [Intrinsics Models](/metrical/calibration_models/cameras.md) [A full breakdown of supported calibration models](/metrical/calibration_models/cameras.md) [View Guide →](/metrical/calibration_models/cameras.md) [![MetriCal Commands](/_assets/metrical_commands.jpg)](/metrical/commands/commands_overview.md) ### [MetriCal Commands](/metrical/commands/commands_overview.md) [Understanding commands and manifests](/metrical/commands/commands_overview.md) [View Guide →](/metrical/commands/commands_overview.md) [![Understanding Results](/_assets/understanding_results.jpg)](/metrical/results/report.md) ### [Understanding Results](/metrical/results/report.md) [Understanding reports and dissecting outputs](/metrical/results/report.md) [View Guide →](/metrical/results/report.md) ## Calibration Guides and Tutorials[​](#calibration-guides-and-tutorials "Direct link to Calibration Guides and Tutorials") [![Calibration Guides](/_assets/calibration_guide.jpg)](/metrical/calibration_guides/guide_overview.md) ### [Calibration Guides](/metrical/calibration_guides/guide_overview.md) [Step-by-step calibration instructions for various sensor setups](/metrical/calibration_guides/guide_overview.md) [View Guide →](/metrical/calibration_guides/guide_overview.md) [![Special Topics](/_assets/special_topic.jpg)](/metrical/special_topics/kalibr_to_metrical_migration.md) ### [Special Topics](/metrical/special_topics/kalibr_to_metrical_migration.md) [Project-specific considerations and guides](/metrical/special_topics/kalibr_to_metrical_migration.md) [View Guide →](/metrical/special_topics/kalibr_to_metrical_migration.md) ## Core Concepts[​](#core-concepts "Direct link to Core Concepts") [![Core Concepts](/_assets/core_concepts.jpg)](/metrical/core_concepts/plex_overview.md) ### [Core Concepts](/metrical/core_concepts/plex_overview.md) [Key concepts behind MetriCal's design](/metrical/core_concepts/plex_overview.md) [View Guide →](/metrical/core_concepts/plex_overview.md) ## Support and Legal[​](#support-and-legal "Direct link to Support and Legal") [![Support and Administration](/_assets/support_admin.jpg)](/metrical/support_and_admin/billing.md) ### [Support and Administration](/metrical/support_and_admin/billing.md) [Licensing, billing, and contact](/metrical/support_and_admin/billing.md) [View Guide →](/metrical/support_and_admin/billing.md) [![Changelog and Migration Guides](/_assets/migration.jpg)](/metrical/changelog.md) ### [Changelog and Migration Guides](/metrical/changelog.md) [Updates and version information](/metrical/changelog.md) [View Guide →](/metrical/changelog.md) ## Using an LLM?[​](#using-an-llm "Direct link to Using an LLM?") Point ChatGPT/Claude/Gemini/etc at our [llms.txt](https://docs.tangramvision.com/llms.txt) or 
[llms-full.txt](https://docs.tangramvision.com/llms-full.txt) files to give the LLM an easy way to ingest MetriCal documentation in markdown format. You can then ask the LLM to find information in the docs, explain concepts and tradeoffs, assist with configuration and troubleshooting, and so on. warning We only provide `llms.txt` content and associated markdown files for the latest stable version of MetriCal! If you need support for older versions of MetriCal or if you'd like to see more support for LLMs, please [contact us](/metrical/support_and_admin/contact.md). --- # MetriCal Results MCAP Every calibration outputs a results MCAP file. By default, this file is named `results.mcap`. This file contains: * Metadata about the software that generated it * The input plex and object-space passed to the calibration * The optimized plex representing the calibrated system * The optimized object space (with any updated spatial constraints for a given object space) * Metrics derived over the dataset that was calibrated. These are split into three categories: 1. Pre-calibration metrics, computed from the data ingestion phase of the software 2. Residual metrics for each cost relationship constructed as part of the calibration 3. Summary statistics for the adjustment, computed from the residual metrics The total sum of this data exists in the MCAP in different places. Generally speaking, an MCAP file stores data in one of three different kinds of records within the format: 1. Metadata 2. Channel Messages 3. Attachments We break down how the above information is organized across these three "sections" of an MCAP below. See [the MCAP specification](https://mcap.dev/spec) for more information on the MCAP format. ## Results MCAP Structure[​](#results-mcap-structure "Direct link to Results MCAP Structure") ### Metadata[​](#metadata "Direct link to Metadata") MetriCal writes the following metadata out to every `results.mcap`: * **Command**: The program used to generate the MCAP (this should typically be `"metrical"`) * **Version**: The version of MetriCal used to perform calibration * **Arguments**: The command line arguments used during calibration These can be extracted from the MCAP as a JSON object using the [`mcap` CLI tool](https://mcap.dev/guides/cli): ``` mcap get metadata --name metrical results.mcap ``` ### Channel Messages[​](#channel-messages "Direct link to Channel Messages") Channel messages in the `results.mcap` can be broken into pre-calibration, residual, and summary metrics. These messages are not written to the results MCAP on an unsuccessful calibration unless the `--override-diagnostics` flag is provided. #### Pre-Calibration Metrics[​](#pre-calibration-metrics "Direct link to Pre-Calibration Metrics") Pre-Calibration metrics are generated after MetriCal has performed data ingestion, detection, and run the various quality and motion filters associated with the input dataset. These metrics are listed below: | **Pre-Calibration Metric** | | ----------------------------------------------------------------------------------------------- | | [Topic Filter Statistics](/metrical/results/pre_calibration_metrics/topic_filter_statistics.md) | | [Binned Feature Counts](/metrical/results/pre_calibration_metrics/binned_feature_counts.md) | | [Circle Coverage](/metrical/results/pre_calibration_metrics/circle_coverage.md) | #### Residual Metrics[​](#residual-metrics "Direct link to Residual Metrics") Residual metrics are generated for each and every cost or observation added to the calibration. 
The most immediately familiar residual metric might be reprojection error, but similar metrics can be derived for other modalities and observations as well. A full list of these is linked below: | **Residual Metric** | **Produced by** | | ------------------- | --------------- | | [Circle Edge Misalignment](/metrical/results/residual_metrics/circle_edge_misalignment.md) | All Camera-LiDAR pairs | | [Circle Misalignment](/metrical/results/residual_metrics/circle_misalignment.md) | All Camera-LiDAR pairs | | [Composed Relative Extrinsics Error](/metrical/results/residual_metrics/composed_relative_extrinsics.md) | All Components and Object pairs | | [Differenced Pose Trajectory Error](/metrical/results/residual_metrics/differenced_pose_trajectory_error.md) | Local Navigation Systems | | [IMU Preintegration Error](/metrical/results/residual_metrics/imu_preintegration_error.md) | All IMUs | | [Image Reprojection](/metrical/results/residual_metrics/image_reprojection.md) | All Cameras | | [Interior Points to Plane Error](/metrical/results/residual_metrics/interior_points_to_plane_error.md) | All Camera-LiDAR pairs | | [Object Inertial Extrinsics Error](/metrical/results/residual_metrics/object_inertial_extrinsic_error.md) | All IMUs | | [Paired 3D Point Error](/metrical/results/residual_metrics/paired_3d_point_error.md) | All LiDAR-LiDAR pairs | #### Summary Statistics[](#summary-statistics "Direct link to Summary Statistics") Of all the metrics output in a `results.mcap` file, the Summary Statistics for a calibration run the most risk of being misinterpreted. Always bear in mind that these figures represent broad, global mathematical strokes, and should be interpreted holistically along with the rest of the metrics of a calibration. All of the summary metrics are generated downstream of the residual metrics; consequently, if you're interested in deriving your own summary metrics, start with the residual metrics, since they contain the raw data from which the summaries are computed. | **Summary Statistic** | **Produced** | | --------------------- | ------------ | | Overall | Every calibration | | Camera | Per Camera | | Camera-LiDAR | Per Camera-LiDAR pair | | LiDAR-LiDAR | Per LiDAR-LiDAR pair | | IMU | Per IMU | These are the same summary statistics that are output in the [console logs](/metrical/results/report.md#summary-statistics-charts-ss). #### Message Schema Definitions[](#message-schema-definitions "Direct link to Message Schema Definitions") See our [tangram-protobuf-messages](https://gitlab.com/tangram-vision/oss/tangram-protobuf-messages) repo for the exact message schemas we support for all of our channel message metrics. ### Attachments[](#attachments "Direct link to Attachments") #### Input Plex[](#input-plex "Direct link to Input Plex") This is the plex that was used during calibration to initialize the priors of the system. This plex is **not refined**, notably in that it usually denotes the beginning state of your system prior to calibration. The input plex can be extracted from the `results.mcap` by using the [`mcap` CLI tool](https://mcap.dev/guides/cli): ``` mcap get attachment --name input-plex results.mcap > input-plex.json ``` This can often be helpful if you want to: 1. Debug what the state of the system was when calibration was invoked; OR 2. Re-use the input plex from a particular run for a subsequent calibration
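If you'd rather pull these attachments programmatically than shell out to the CLI, a sketch along the following lines should work. It assumes the `mcap` Python package and its reader's `iter_attachments()` helper, which is an assumption worth verifying against the package version you have installed; the CLI commands shown in this section remain the supported path.

```python
from mcap.reader import make_reader

# Sketch: write every attachment in a results MCAP out to its own file.
# Attachment names are expected to include input-plex, optimized-plex,
# input-object-space, and optimized-object-space, as described in this section.
with open("results.mcap", "rb") as f:
    reader = make_reader(f)
    for attachment in reader.iter_attachments():
        out_path = f"{attachment.name}.json"
        with open(out_path, "wb") as out:
            out.write(attachment.data)
        print(f"wrote {out_path}")
```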
#### Optimized Plex[](#optimized-plex "Direct link to Optimized Plex") The optimized Plex is a description of the now-calibrated System. This Plex is typically more "complete" and information-rich than the input Plex, since it is based on the real data used to calibrate the System. The optimized plex can be extracted from the `results.mcap` by using the [`mcap` CLI tool](https://mcap.dev/guides/cli): ``` mcap get attachment --name optimized-plex results.mcap > optimized-plex.json ``` #### Input Object-Space[](#input-object-space "Direct link to Input Object-Space") This is the object-space that was used during calibration to initialize the priors of the object-space definition. This object-space is **not refined**, notably in that it defines the state of the objects / targets in the scene prior to optimization. The input object-space can be extracted from the `results.mcap` by using the [`mcap` CLI tool](https://mcap.dev/guides/cli): ``` mcap get attachment --name input-object-space results.mcap > input-object-space.json ``` This can often be helpful if you want to: 1. Debug which object-space was used for a particular calibration 2. Better understand downstream effects on the optimized object-space or how error was pushed into the object-space during calibration #### Optimized Object-Space[](#optimized-object-space "Direct link to Optimized Object-Space") MetriCal will optimize over the object spaces used in every calibration. For example, if your object space consists of a checkerboard, MetriCal will directly estimate how flat (or not) the checkerboard actually is using the calibration data. Additionally, the optimized object-space will contain optimized spatial constraints, which can be an easy check to validate whether the scene geometry you observed was correctly reconstructed. The optimized object-space is meant to be used with modes like [`consolidate-object-spaces`](/metrical/commands/calibration/consolidate.md), or simply re-used in a subsequent [`calibrate` mode](/metrical/commands/calibration/calibrate.md) command. You can extract the object-space from the `results.mcap` by using the [`mcap` CLI tool](https://mcap.dev/guides/cli): ``` mcap get attachment --name optimized-object-space results.mcap > optimized-object.json ``` #### Plex and Object-Space Schemas[](#plex-and-object-space-schemas "Direct link to Plex and Object-Space Schemas") See the individual pages for [plexes](/metrical/core_concepts/plex_overview.md#plex-schema) and [object-spaces](/metrical/core_concepts/object_space_overview.md#object-space-schema) for more information on the schemata for these types. --- # Binned Feature Counts ## Overview[](#overview "Direct link to Overview") Binned feature counts are most frequently visible through the [Camera FOV Coverage](/metrical/results/report.md#di-004-camera-fov-coverage) table in the report. These pre-calibration messages comprise: 1. A vector of binned feature counts, represented as a grid in row-major ordering 2. The number of rows of the aforementioned grid 3. The number of columns of the aforementioned grid Each of these is organized by the camera UUID in the input plex and corresponds to that camera's FoV coverage. --- # Circle Coverage ## Overview[](#overview "Direct link to Overview") The circle coverage metrics are most frequently visible through the [LiDAR Circle Coverage](/metrical/results/report.md#di-005-lidar-circle-coverage) table in the report.
These pre-calibration messages comprise: 1. The binned continuous angles of the histogram (bins exist across the range 0 to π in radians). A notable difference here is that each bin is a measure of radians, not degrees, as shown in the corresponding report table DI-005 2. The number of total bins Each of these are organized by LiDAR UUID in the input plex and corresponds to that LiDAR's circular target coverage. --- # Topic Filter Statistics ## Overview[​](#overview "Direct link to Overview") The topic filtering statistics are most frequently visible through the [processed observation count](/metrical/results/report.md#di-003-processed-observation-count) table in the report. These pre-calibration messages comprise: 1. Number of messages on a given channel / topic in the data 2. Number of messages that had successful detections for a given topic 3. Number of messages that had successful detections, AND satisfied MetriCal's quality filter for a given topic 4. Number of messages that had successful detections, AND satisfied MetriCal's quality filter, AND satisfied MetriCal's motion filter for a given topic Each of these filtering statistics are organized by the component UUID in the input plex that corresponds to their given topics. --- # MetriCal Reports When running a calibration with MetriCal, successful runs generate a comprehensive set of charts and diagnostics to help you understand your calibration quality. These are output to your command line interface as full report detailing every run. This documentation explains each report section, what metrics they display, and how to interpret figures to improve your calibration workflow. ## Generating Reports[​](#generating-reports "Direct link to Generating Reports") Reports can be saved to an HTML file for later inspection. This is useful for sharing results with team members or for archiving results in a human-readable format. ### Calibrate Mode[​](#calibrate-mode "Direct link to Calibrate Mode") Use the `--report-path` argument when running MetriCal's [Calibrate command](/metrical/commands/calibration/calibrate.md) to save the CLI report directly to an HTML file. ``` metrical calibrate --report-path "report.html" ... ``` ### Report Mode[​](#report-mode "Direct link to Report Mode") Use [Report command](/metrical/commands/cli_utilities/report.md) to generate a report from a plex or the [output file](/metrical/results/output_file.md) of a previous calibration. ``` metrical report [OPTIONS] ``` ## Color Coding in MetriCal Output[​](#color-coding-in-metrical-output "Direct link to Color Coding in MetriCal Output") MetriCal uses ANSI terminal codes for colorizing output according to an internal assessment of metric quality: * █ **Cyan** : Spectacular (excellent calibration quality) * █ **Green**: Good (solid calibration quality) * █ **Orange**: Okay, but generally poor (may need improvement) * █ **Red**: Bad (likely needs attention) Note that this quality assessment is based on experience with a variety of datasets and may not accurately reflect your specific calibration needs. Use these colors as a general guide, but always consider the context of your calibration setup and goals. ## Chart Sections Overview[​](#chart-sections-overview "Direct link to Chart Sections Overview") MetriCal organizes outputs into six main sections: 1. **Data Inputs** (`DI-*` prefix) - Information about your input data 2. **Camera Modeling** (`CM-*` prefix) - Charts showing how well the camera models fit the data 3. 
**Extrinsics Info** (`EI-*` prefix) - Metrics on the spatial relationships between components 4. **Calibrated Plex** (`CP-*` prefix) - Results of your calibration 5. **Summary Stats** (`SS-*` prefix) - Overall performance metrics 6. **Data Diagnostics** (`DD-*` prefix) - Warnings and advice about your calibration dataset Let's explore each chart in detail. ## Data Inputs Charts (DI)[​](#data-inputs-charts-di "Direct link to Data Inputs Charts (DI)") These charts provide information about the data you provided to MetriCal. ### DI-001: Calibration Metadata[​](#di-001-calibration-metadata "Direct link to DI-001: Calibration Metadata") Loading... This table displays the basic configuration settings used for your calibration run. Both the input arguments and the actual settings used during optimization are shown for reference. ### DI-002: Object Space Descriptions[​](#di-002-object-space-descriptions "Direct link to DI-002: Object Space Descriptions") Loading... This table describes the calibration targets (object spaces) used in your dataset: * **Type**: The type of target object (e.g., DotMarkers, Charuco, etc.) * **UUID**: The unique identifier of the object * **Detector**: The detector used (e.g., "Dictionary: Aruco7x7\_50") and its description * **Variance**: The expected prior variances of the target descriptor ### DI-003: Processed Observation Count[​](#di-003-processed-observation-count "Direct link to DI-003: Processed Observation Count") Loading... This critical table shows how many observations were processed from your dataset: * **Component**: The sensor component name and UUID * **# read**: Total number of observations read from the dataset * **# with detections**: Number of measurements where the detector identified features * **# after quality filter**: Number of measurements that passed the quality filter * **# after motion filter**: Number of measurements that passed the motion filter If there's a significant drop between any of these columns, it may indicate an issue with your dataset or settings. These instances will be flagged in the diagnostics section. ### DI-004: Camera FOV Coverage[​](#di-004-camera-fov-coverage "Direct link to DI-004: Camera FOV Coverage") Loading... This visual chart shows how well your calibration data covers the field of view (FOV) of each camera: * The chart divides each camera's image into a 10x10 grid * Each grid cell shows how many features were detected in that region * Color coding indicates feature density: * █ **Red**: No features detected * █ **Orange**: 1-15 features detected * █ **Green**: 16-50 features detected * █ **Cyan**: >50 features detected Ideally, you want to see a mostly green/cyan grid with minimal red cells, indicating good coverage across the entire FOV. Poor coverage can lead to inaccurate intrinsics calibration. ### DI-005: LiDAR Circle Coverage[​](#di-005-lidar-circle-coverage "Direct link to DI-005: LiDAR Circle Coverage") Loading... The visual chart shows a histogram of the angular coverage of the LiDAR circle board. In ideal scenarios, it is expected that this histogram should appear as a mostly Gaussian distribution centred close to 360°. This would mean that you covered the entire extent of the retroreflective circular target evenly throughout your capture (as opposed to just one corner of it, where the maximum contiguous angle would be shorter, for example). 
* **Component**: The sensor component name and UUID * **Max Contiguous Angle Coverage**: A histogram of the maximum contiguous angular arc detected of the retroreflective circle target, with the X-axis in degrees of maximum contiguous detected coverage and the Y-axis counting the frequency of point-cloud messages that contained a detected circle target. ### DI-006: Detection Timeline[​](#di-006-detection-timeline "Direct link to DI-006: Detection Timeline") Loading... This chart visualizes when detections occurred across your dataset timeline: * X-axis represents time in seconds since the first observation * Each row represents a different sensor component * Points indicate timestamps when features were detected * Components are color-coded for easy differentiation * If no detections were found for a given component, then `(N/A)` is printed instead. This helps you visualize how synchronized your sensor data is and identify any gaps in observations, or components that lacked detections at all. If you expect all of your observations to align nicely, but they aren't aligned at all, it's a sign that your timestamps are not being written or read correctly. Either way, this table is a good place to start debugging. ## Camera Modeling Charts (CM)[​](#camera-modeling-charts-cm "Direct link to Camera Modeling Charts (CM)") These charts show how well the selected camera models fit your calibration data. ### CM-001: Binned Reprojection Errors[​](#cm-001-binned-reprojection-errors "Direct link to CM-001: Binned Reprojection Errors") Loading... This heatmap visualizes reprojection errors across the camera's field of view: * The chart shows a 10x10 grid representing the camera FOV * Each cell contains the weighted RMSE (Root Mean Square Error) of reprojection errors in that region * Color coding indicates error magnitude: * █ **Cyan**: < 0.1px error * █ **Green**: 0.1-0.25px error * █ **Orange**: 0.25-1.0px error * █ **Red**: > 1px error or no data Ideally, most cells should be cyan or green. Areas with consistently higher errors (orange/red) may indicate issues with your camera model or lens distortion that isn't being captured correctly. ### CM-002: Stereo Pair Rectification Error[​](#cm-002-stereo-pair-rectification-error "Direct link to CM-002: Stereo Pair Rectification Error") Loading... For multi-camera setups, this chart shows the stereo rectification error between camera pairs. Any cameras that saw the same targets at the same time are added to a stereo pair. * Lists each camera pair combination * Shows various error metrics for stereo rectification * Indicates how well the extrinsic calibration aligns the two cameras Lower RMSE values indicate better stereo calibration. The number of mutual images is the number of images that contained overlapping stereo observations between the specified components pairs, and the number of mutual features indicates the total number of matched points between those overlapping stereo observations. The histogram is generated relative to the number of mutual features. ### CM-003: Max Camera Motion Range[​](#cm-003-max-camera-motion-range "Direct link to CM-003: Max Camera Motion Range") Loading... This table compares the motion of a given camera component relative to each object board in the object-space config. This is computed as the maximum convergence angle both horizontally and vertically of a camera relative to the board. The difference in Z is the depth distance towards/away-from the board. 
* Lists each camera-board combination * Shows the total range of motion in horizontal & vertical angles (moving laterally around the board), as well as the total depth difference in Z relative to the board. As a general heuristic, MetriCal looks to find approximately 60° of convergent motion relative to the board, and ±0.5m of depth motion towards/away-from the board. In cases where these motion thresholds are not met, a data diagnostic will be issued. Note that this data diagnostic is not always actionable dependent on what kind of camera you are calibrating: For example, a narrow field-of-view camera may not be able to achieve 60° of convergent motion relative to a board purely due to space constraints. Nevertheless, special care should be taken in such circumstances to ensure that the relevant camera model is still observable. ## Extrinsics Info Charts (EI)[​](#extrinsics-info-charts-ei "Direct link to Extrinsics Info Charts (EI)") These charts provide metrics on the spatial relationships between your calibrated components. ### EI-001: Component Extrinsics Errors[​](#ei-001-component-extrinsics-errors "Direct link to EI-001: Component Extrinsics Errors") Loading... This is a complete summary of all component extrinsics errors (as RMSE) between each pair of components, as described by the [Composed Relative Extrinsics metrics](/metrical/results/residual_metrics/composed_relative_extrinsics.md). This table is probably one of the most useful when evaluating the quality of a plex's extrinsics calibration. Note that the extrinsics errors are weighted, which means outliers are taken into account. Lower values indicate more precise extrinsic calibration between components. Rotations are printed as Euler angles, using Extrinsic XYZ convention. * **X, Y, Z (m)**: Translation errors in meters * **Roll, Pitch, Yaw (°)**: Rotation errors in degrees ## Summary Statistics Charts (SS)[​](#summary-statistics-charts-ss "Direct link to Summary Statistics Charts (SS)") These charts provide overall metrics on calibration quality. ### SS-001: Optimization Summary Statistics[​](#ss-001-optimization-summary-statistics "Direct link to SS-001: Optimization Summary Statistics") Loading... This table provides high-level metrics about the optimization process: * **Optimized Object RMSE**: The overall reprojection error across all cameras * **Posterior Variance**: A statistical measure of the calibration uncertainty Lower values indicate a more accurate calibration. ### SS-002: Camera Summary Statistics[​](#ss-002-camera-summary-statistics "Direct link to SS-002: Camera Summary Statistics") Loading... This table summarizes reprojection errors for each camera. Typically, values under 0.5px indicate good calibration, with values under 0.2px being excellent. However, this can vary based on your camera image resolution or camera type. Comparing Camera RMSE If two cameras have pixels of different sizes, then it is important to first convert these RMSEs to some metric size so as to compare them equally. This is what `pixel_pitch` in the [Plex API](/metrical/core_concepts/components.md#camera) is for: cameras can be compared more equally with that in mind, as the pixel size between two cameras is not always equal! ### SS-003: Camera-LiDAR Summary Statistics[​](#ss-003-camera-lidar-summary-statistics "Direct link to SS-003: Camera-LiDAR Summary Statistics") Loading... 
The Camera-LiDAR Summary Statistics show the Root Mean Square Error (RMSE) of three different types of residual metrics: * [Circle Misalignment](/metrical/results/residual_metrics/circle_misalignment.md), if a camera-lidar pair is co-visible with a lidar circle target. * [Circle Edge Misalignment](/metrical/results/residual_metrics/circle_edge_misalignment.md), if a camera-lidar pair is co-visible with a lidar circle target. * [Interior Points to Plane Error](/metrical/results/residual_metrics/interior_points_to_plane_error.md), if a camera-lidar pair is co-visible with a lidar circle target. For a component that has been appropriately modeled (i.e. there are no un-modeled systematic error sources present), this represents the mean quantity of error from observations taken by a single component. ### SS-004: LiDAR-LiDAR Summary Statistics[​](#ss-004-lidar-lidar-summary-statistics "Direct link to SS-004: LiDAR-LiDAR Summary Statistics") Loading... The LiDAR-LiDAR Summary Statistics show the Root Mean Square Error (RMSE) of one residual metric: * [Paired 3D Point Error](/metrical/results/residual_metrics/paired_3d_point_error.md), if co-visible LiDAR are present Comparing LiDAR RMSE Two LiDAR calibrated simultaneously will have the same RMSE relative to one another. This makes intuitive sense: LiDAR A will have a certain relative error to LiDAR B, but LiDAR B will have that same relative error when compared to LiDAR A. Make sure to take this into account when comparing LiDAR RMSE more generally. ### SS-005: IMU Summary Statistics[​](#ss-005-imu-summary-statistics "Direct link to SS-005: IMU Summary Statistics") Loading... The IMU Summary Statistics show the Root Mean Square Error (RMSE) of one residual metric: * [IMU Preintegration Error](/metrical/results/residual_metrics/imu_preintegration_error.md) ## Data Diagnostics[​](#data-diagnostics "Direct link to Data Diagnostics") Loading... MetriCal performs comprehensive analysis of your calibration data and provides diagnostics to help identify potential issues. 
These diagnostics are categorized by severity level: ### █ DD-001 High-Risk Diagnostics[​](#-dd-001-high-risk-diagnostics "Direct link to -dd-001-high-risk-diagnostics") Critical issues that likely need to be addressed for reliable calibration: * **Poor Camera Range of Motion**: Camera movement range insufficient for calibration * **No Component to Register Against**: Missing required component for calibration * **Component Missing Compatible Component Type**: Compatible component exists but has no detections * **Motion Filter filtered all component observations**: One of our filters removed all observations, or no observations had any usable detections for a requested topic * **Object Has Large Variances**: Object has excessive variance (> 1e-6) * **Projective Compensation**: MetriCal has detected a common projective compensation in the results * **Poor LiDAR Feature Coverage**: Poor feature coverage in LiDAR circle captures ### █ DD-002 Medium-Risk Diagnostics[​](#-dd-002-medium-risk-diagnostics "Direct link to -dd-002-medium-risk-diagnostics") Issues that should be addressed but may not prevent successful calibration: * **Poor Camera Feature Coverage**: Poor feature coverage in camera FOV (< 75%) * **Two or More Spatial Subplexes**: Multiple unrelated spatial groups detected * **Low Mutual Observations Count**: Camera pair could benefit from more mutual observations * **Component Shares No Sync Groups**: Component has no timestamp overlap with others ### █ DD-003 Low-Risk Diagnostics[​](#-dd-003-low-risk-diagnostics "Direct link to -dd-003-low-risk-diagnostics") Advice that might help improve calibration quality: * **Component Has Many Low Quality Detections**: High proportion of detections discarded due to quality issues ## Calibrated Plex Charts (CP)[​](#calibrated-plex-charts-cp "Direct link to Calibrated Plex Charts (CP)") These tables display the actual calibration results for your sensor system. ### CP-001: Camera Metrics[​](#cp-001-camera-metrics "Direct link to CP-001: Camera Metrics") Loading... This table shows the calibrated intrinsic parameters for each camera. Different models will have different interpretations; see the [Camera Models](/metrical/calibration_models/cameras.md) page for more. * **Camera**: The component name and UUID of each camera * **Frame Specification**: Basic frame specifications for the camera (width, height, pixel pitch) * **Projection Model**: Calibrated projection parameters (focal length, principal point) * **Distortion Model**: Calibrated distortion parameters (varies by model type) The standard deviations (±) indicate the uncertainty of each parameter. ### CP-002: Optimized IMU Metrics[​](#cp-002-optimized-imu-metrics "Direct link to CP-002: Optimized IMU Metrics") Loading... This table presents all IMU metrics derived for every IMU component in a calibration run. The most interesting column for most users is the Intrinsics: scale, shear, rotation, and g sensitivity. ### CP-003: Calibrated Extrinsics[​](#cp-003-calibrated-extrinsics "Direct link to CP-003: Calibrated Extrinsics") Loading... This table represents the [Minimum Spanning Tree](/metrical/commands/calibration/shape/shape_mst.md) of all spatial constraints in the Plex. Note that this table doesn't print *all* spatial constraints in the plex; it just takes the "best" constraints possible that would still preserve the structure. Rotations are printed as Euler angles, using Extrinsic XYZ convention. 
* **Translation (m)**: The X, Y, Z position of each component relative to the origin * **Diff from input (mm)**: How much the calibration changed from the initial values * **Rotation (°)**: The Roll, Pitch, Yaw rotation of each component in degrees * **Diff from input (°)**: How much the rotation changed from the initial values The table also indicates which "subplex" each component belongs to (components that share spatial relationships). ## Output Summary[​](#output-summary "Direct link to Output Summary") Loading... This is a summary of the output files generated by MetriCal, or instructions on how to access them. ## Conclusion[​](#conclusion "Direct link to Conclusion") Interpreting MetriCal output charts is key to understanding your calibration quality and identifying areas for improvement. By systematically analyzing each section, you can iteratively improve your calibration process to achieve more accurate results. Remember that calibration is both an art and a science—experimental design matters greatly, and the metrics provided by MetriCal help you quantify the quality of your calibration data and results. --- # Circle Edge Misalignment Created by: All LiDAR ## Overview[​](#overview "Direct link to Overview") Circle Edge Misalignment is similar to the [Circle Misalignment](/metrical/results/residual_metrics/circle_misalignment.md) metric. Where the Circle Misalignment cost evaluates the difference between the inferred circle center of the observed ring of retroreflective tape and the known center relative to what a camera observes, the Circle Edge Misalignment instead measures the inferred radius of the observed ring of points in combination with the planar alignment of the circle transform. These are compared to the radius described by the transform in the object-space and expected planar orientation relative to the markerboard (or other target kind). There is one circle edge misalignment metric group for every circular target in use. ## Definition[​](#definition "Direct link to Definition") | Field | Type | Description | | ---------------------------------------- | ------------------------------------ | ---------------------------------------------------------------------------------------------- | | `metadata` | A common metadata object | The metadata associated with the point cloud this circle target was detected in. | | `object_space_id` | UUID | The UUID of the circle target in the object-space that was observed. 
|
| `world_extrinsics` | An array of world extrinsics objects | The world pose (camera from object-space) that corresponds to each `circle_center_misalignment` |
| `world_extrinsics_component_ids` | An array of UUIDs | The camera UUIDs for each world extrinsic in `world_extrinsics` |
| `edge_points_x` | An array of 64-bit floats | The X-coordinates of the edge points |
| `edge_points_y` | An array of 64-bit floats | The Y-coordinates of the edge points |
| `edge_points_z` | An array of 64-bit floats | The Z-coordinates of the edge points |
| `edge_distances_from_radius` | An array of arrays of 64-bit floats | The individual edge distances for each circle that has a world extrinsic |
| `edge_distances_from_radius_rmse_per_we` | An array of 64-bit floats | The edge-distance-from-radius RMSE per world extrinsic |
| `edge_distances_from_radius_rmse` | A 64-bit float | The edge-distance-from-radius RMSE over all world extrinsics |

## Analysis

The circle edge misalignment metrics compare the radius of the circle defined by the retroreflective edge points (pictured below) against the radius and planar orientation expected from the circle-target-to-board relationship encoded in the object space.

![Circle Inliers](/assets/images/circle_plane_inliers-ceda93116879605e71c4c7c6bd1c0d5c.png)

The math for getting the LiDAR's observed frame (coordinates of the edge points) is the same as described in [the circle misalignment cost](/metrical/results/residual_metrics/circle_misalignment.md#analysis).

---

# Circle Misalignment

Created by: All Camera-LiDAR pairs

## Overview

Circle misalignment is a metric unique to MetriCal. It's an artifact of the way MetriCal bridges the two distinct modalities of camera (primarily 2D features and projection) and LiDAR (3D point clouds in Euclidean space) via the [camera-lidar target](/metrical/targets/combining_modalities.md). There is one circle misalignment metric group for every circular target in use.

## Definition

Circle misalignment metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `metadata` | A common metadata object | The metadata associated with the point cloud this circle target was measured in. |
| `object_space_id` | UUID | The UUID of the object space that was being observed. |
| `measured_circle_center` | An array of 3 float values | The X/Y/Z coordinate of the center of the circle target, in the LiDAR coordinate frame |
| `world_extrinsics_component_ids` | An array of UUIDs | The camera UUIDs for each world extrinsic in `world_extrinsics` |
| `world_extrinsics` | An array of world extrinsics objects | The world pose (camera from object space) that corresponds to each `circle_center_misalignment` |
| `circle_center_misalignment` | An array of circle center coordinates | The errors between the circle center location estimated by each camera and the center observed by the LiDAR |
| `circle_center_rmse` | Float | The circle center misalignment RMSE over all world extrinsics. |

## Analysis

### The "Center"

Much of the circle misalignment metric is about bridging the gap between the two modalities.
* The center of the circle in *camera space* is the center of the ChArUco board, given its metric dimensions
* The center of the circle in *LiDAR space* is the centroid of the planar 2D circle fit from the points detected on the ring of retro-reflective tape.

![Circle Inliers](/assets/images/circle_plane_inliers-ceda93116879605e71c4c7c6bd1c0d5c.png)

The `measured_circle_center` above is this LiDAR center; the `circle_center_misalignment` is the error between that LiDAR circle center and the circle center estimated from each camera.

This might seem straightforward, but there's a bit more to it than that. Since there are no commonly observable features between cameras and LiDAR, MetriCal has to use a bit of math to make calibration work.

Think of the object circle center as our origin; we'll call this $C_O$. The LiDAR circle center is that same point, but in the LiDAR coordinate frame:

$$C_L = \Gamma_O^L \cdot C_O$$

...and every camera has its own estimate of the circle center w.r.t. the camera board, $C_C$:

$$C_C = \Gamma_O^C \cdot C_O$$

We can relate these centers to one another by using the extrinsics between LiDAR and camera, $\Gamma_C^L$:

$$\hat{C}_L = \Gamma_C^L \cdot C_C = \Gamma_C^L \cdot \Gamma_O^C \cdot C_O$$

With both $C_L$ and $\hat{C}_L$ in the LiDAR coordinate frame, we can calculate the error between the two and get our `circle_center_misalignment`:

$$ccm = C_L - \hat{C}_L$$

$\Gamma_O^C$ is what is referred to when we say `world_extrinsics`, and the `world_extrinsics_component_ids` designate which camera that extrinsic relates to. MetriCal calculates these for every pair of synced camera-LiDAR observations.
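If it helps to see the same composition as code, here is a minimal numpy sketch, assuming the extrinsics are expressed as 4×4 homogeneous matrices. The function and argument names are illustrative only, not part of MetriCal's API:

```python
import numpy as np

def circle_center_misalignment(gamma_l_from_c, gamma_c_from_o, c_obj, c_lidar):
    """Recompute a single circle_center_misalignment entry.

    gamma_l_from_c: 4x4 LiDAR-from-camera extrinsic
    gamma_c_from_o: 4x4 camera-from-object world extrinsic
    c_obj:          circle center in the object frame, shape (3,)
    c_lidar:        measured_circle_center in the LiDAR frame, shape (3,)
    """
    c_obj_h = np.append(c_obj, 1.0)                   # homogeneous point
    c_hat_lidar = (gamma_l_from_c @ gamma_c_from_o @ c_obj_h)[:3]
    return c_lidar - c_hat_lidar                      # ccm = C_L - C-hat_L
```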
---

# Composed Relative Extrinsics

Created by: All Components and Objects

## Overview

These metrics refer to the error between relative extrinsics measurements (that are composed between components and objects) and the current estimated extrinsic. What does this mean?

* Components A and B have a *relative extrinsic* formed by object O, represented by $\Gamma_O^A \cdot \Gamma_B^O$.
* The *current estimated extrinsic* between A and B is just the transform between the two components, $\Gamma_B^A$.

Ideally, these two should be the same:

$$\Gamma_B^A \approx \Gamma_O^A \cdot \Gamma_B^O$$

...but nothing's perfect in optimization. The error between these two is what we capture in the Composed Relative Extrinsics.

Since MetriCal optimizes for both components and objects, both components and objects have relative constraints. When the `kind` is a *component* relative extrinsic, each `common_uuid` will refer to object spaces. Correspondingly, when the `kind` is an *object* relative extrinsic, `common_uuid` will refer to components.

![Composed Relative Extrinsics](/assets/images/composed_relative_extrinsics-87acd49525ae70d8b905a8c13747bdde.png)

## Description

Composed relative extrinsics metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `kind` | String | The kind of relative extrinsic (either "Component" or "Object"). |
| `from` | UUID | The UUID of the "from" coordinate frame. |
| `to` | UUID | The UUID of the "to" coordinate frame. |
| `extrinsics_differences` | An array of extrinsics objects | The differences from a unit extrinsic when subtracting the composed world extrinsics from the estimated component or object extrinsic. |
| `common_uuids` | An array of UUIDs | The "common" UUIDs that link a component relative extrinsic or object relative extrinsic. |

---

# Differenced Pose Trajectory Error

Created by: All Camera-Local Navigation System pairs

## Overview

Differenced Pose Trajectory Errors describe the alignment errors from synchronized camera poses (relative to an object-space) compared to poses output by the LNS. These poses are composed by utilizing the component-relative extrinsic between every Camera and LNS.

## Definition

### Differenced Pose Trajectory Error

| Field | Type | Description |
| --- | --- | --- |
| `errors` | An array of "Differenced Pose Trajectory Observation Groups" | The collection of DPT observation group errors |

### Differenced Pose Trajectory Observation Groups

| Field | Type | Description |
| --- | --- | --- |
| `component_id` | UUID | The UUID of the LNS component |
| `other_id` | UUID | The UUID of the "other" component being compared |
| `observations` | An array of "Differenced Pose Trajectory Observations" | The double-differenced errors across sequential timestamps between `component` and `other`. |

### Differenced Pose Trajectory Observation

| Field | Type | Description |
| --- | --- | --- |
| `t0` | 64-bit integer | The timestamp at `t0` |
| `t1` | 64-bit integer | The timestamp at `t1` |
| `t1_from_t0_pose_difference` | Pose | The double-differenced pose between the `t1`-from-`t0` transforms of `component` and `other` of the group. |
| `object_id` | UUID | The UUID of the object space observed by `other`. |

## Analysis

The double-differencing formula used to compute this deals with four poses and one component relative extrinsic (CRE) transform:

1. $L_{\text{LNS frame}}^{t_1}$: The LNS pose at time `t1` (directly observed)
2. $L_{\text{LNS frame}}^{t_0}$: The LNS pose at time `t0` (directly observed)
3. $C_{\text{object}}^{t_1}$: The `other`-from-object pose (world extrinsic) at time `t1`
4. $C_{\text{object}}^{t_0}$: The `other`-from-object pose (world extrinsic) at time `t0`
5. $\Gamma_L^C$: The `other`-from-LNS transform, optimized by the calibration process

By knowing these four poses, as well as the optimized transform between the `other` component and the LNS, we can then formulate the error as follows:

$$C_{t_0}^{t_1} = C_{\text{object}}^{t_1} \cdot (C_{\text{object}}^{t_0})^{-1}$$

and

$$L_{t_0}^{t_1} = L_{\text{LNS frame}}^{t_1} \cdot (L_{\text{LNS frame}}^{t_0})^{-1}$$

From there, we can then compute the error of the CRE between the two pose differences (this is where the second "difference" of double-differencing comes in) by computing:

$$\epsilon = C_{t_0}^{t_1} \cdot \left((\Gamma_C^L)^{-1} \cdot L_{t_0}^{t_1} \cdot \Gamma_C^L\right)$$

That $\epsilon$ is the `t1_from_t0_pose_difference` as described in the differenced pose trajectory observation. Note that these pose differences are returned as a "pose," which may seem odd at first glance. In truth, these "poses" should be close to identity, with the difference being the error across the double-difference operation.
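For readers who prefer code to notation, a minimal numpy sketch of the same computation follows. It assumes 4×4 homogeneous matrices; the function and argument names are illustrative only:

```python
import numpy as np

def double_differenced_error(c_t1, c_t0, l_t1, l_t0, gamma_lns_from_other):
    """Recompute t1_from_t0_pose_difference from its four input poses.

    All arguments are 4x4 homogeneous transforms:
      c_t1, c_t0:            `other`-from-object world extrinsics at t1 and t0
      l_t1, l_t0:            LNS poses at t1 and t0
      gamma_lns_from_other:  the optimized LNS-from-`other` transform
    """
    c_delta = c_t1 @ np.linalg.inv(c_t0)   # C_{t0}^{t1}
    l_delta = l_t1 @ np.linalg.inv(l_t0)   # L_{t0}^{t1}
    # Map the LNS pose difference into the `other` frame, then compare
    g = gamma_lns_from_other
    return c_delta @ (np.linalg.inv(g) @ l_delta @ g)
```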
For translations, this error is easy to interpret; however, it is not always as easily interpreted for orientations, as the error can come from either of the two estimated `other`-from-object world extrinsics or from the estimated $\Gamma_L^C$ transform.

---

# Image Reprojection

Created by: Cameras

## Overview

Reprojection error is the error in the position of a feature in an image, as compared to the position of its corresponding feature in object space. More simply put, it tells you how well the camera model generalizes in the real world.

Reprojection is often seen as the only error metric to measure *precision* within an adjustment. Reprojection errors can tell us a lot about the calibration process and provide insight into what image effects were (or were not) properly calibrated for.

## Definition

Image reprojection metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `metadata` | A common image metadata object | The metadata associated with the image that this reprojection data was constructed from. |
| `object_space_id` | UUID | The UUID of the object space that was observed by the image this reprojection data corresponds to. |
| `ids` | An array of integers | The identifiers of the object space features detected in this image. |
| `us` | An array of floats | The u-coordinates for each object space feature detected in this image. |
| `vs` | An array of floats | The v-coordinates for each object space feature detected in this image. |
| `rs` | An array of floats | The radial polar coordinates for each object space feature detected in this image. |
| `ts` | An array of floats | The tangential polar coordinates for each object space feature detected in this image. |
| `dus` | An array of floats | The error in u-coordinates for each object space feature detected in this image. |
| `dvs` | An array of floats | The error in v-coordinates for each object space feature detected in this image. |
| `drs` | An array of floats | The error in radial polar coordinates for each object space feature detected in this image. |
| `dts` | An array of floats | The error in tangential polar coordinates for each object space feature detected in this image. |
| `world_extrinsic` | An extrinsics object | The pose of the camera (camera from object space) for this image. |
| `object_xs` | An array of floats | The object space (3D) X-coordinates for each object space feature detected in this image. |
| `object_ys` | An array of floats | The object space (3D) Y-coordinates for each object space feature detected in this image. |
| `object_zs` | An array of floats | The object space (3D) Z-coordinates for each object space feature detected in this image. |
| `rmse` | Float | Root mean square error of the image residuals, in pixels. |

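As a quick sanity check on these fields, the per-image `rmse` should line up with the root mean square of the `dus`/`dvs` residuals. The sketch below assumes an unweighted RMSE and is illustrative only, not a statement of MetriCal's exact implementation:

```python
import numpy as np

def image_rmse(dus, dvs):
    """Recompute a per-image reprojection RMSE from the du/dv residual arrays."""
    du = np.asarray(dus)
    dv = np.asarray(dvs)
    return np.sqrt(np.mean(du**2 + dv**2))
```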
## Camera Coordinate Frames

MetriCal uses two different conventions for image space points. Both can and should be used when analyzing camera calibration statistics.

### CV (Computer Vision) Coordinate Frame

![xy image space](/assets/images/xy_image_space-f34ff71462d66755a8394839a4346e90.png)

This is the standard coordinate system used in computer vision. The origin of the coordinate system is $(x_0, y_0)$, and is located in the upper left corner of the image. When working in this coordinate frame, we use lower-case x and y to denote that these coordinates are in the image, whereas upper-case X, Y, and Z are used to denote coordinates of a point in object space.

This coordinate frame is useful when examining feature space, or building histograms across an image.

### UV Coordinate Frame

![uv image space](/assets/images/uv_image_space-dceffca7fc9b9aa2c993c36b2f3bf244.png)

This coordinate frame maintains the same axis conventions, but instead places the origin at the principal point of the image, labeled as $(u_0, v_0)$. Notice that the coordinate dimensions are referred to as lower-case u and v, to denote that axes are in image space and relative to the principal point.

Most charts that deal with reprojection error convey more information when plotted in UV coordinates than CV coordinates. For instance, radial distortion increases proportional to radial distance from the principal point, not the top-left corner of the image.

### Cartesian vs. Polar

In addition to understanding the different origins of the two coordinate frames, polar coordinates are sometimes used in order to visualize reprojections as a function of radial or tangential differences. When using polar coordinates, points are likewise centered about the principal point. Thus, we go from our previous $(u, v)$ frame to $(r, t)$.

| Cartesian | Polar |
| --- | --- |
| ![cartesian point](/assets/images/cartesian_point-7ab5e76641187c7e767d84a8b7eebeff.png) | ![polar point](/assets/images/polar_point-a07357efa392f10b08da1f1141a06494.png) |

## Analysis

Below are some useful reprojection metrics and trends that can be derived from the numbers found in `results.json`.

### Feature Coverage Analysis

**Data Based In**: Either CV or UV Coordinate Frame

All image space observations made from a single camera component over the entire calibration process are plotted. This gives us a sense of data coverage over the domain of the image. For a camera calibration process, this chart should ideally have an even, uniform distribution of points within the image without any large empty spaces. This even spread prevents a camera model from *overfitting* on any one area.

![feature coverage analysis](/assets/images/image_coverage_analysis-36b667f749a0de6d9fb732a15f620d6b.png)

In the above example, there are some empty spaces near the periphery of the image. This can happen due to image vignetting (during the capture process), or simply because one did not move the target to have coverage in that part of the scene during data capture.
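If you want to reproduce this kind of coverage check numerically, a rough numpy sketch follows. It assumes you have feature locations in pixel (CV) coordinates; if you start from the `us`/`vs` fields above, add the principal point back in first. The binning mirrors the 10x10 grid used by the DI-004 chart:

```python
import numpy as np

def coverage_grid(xs, ys, width, height, bins=10):
    """Bin detected feature locations (CV frame, pixels) into a coverage grid."""
    grid, _, _ = np.histogram2d(
        np.asarray(xs), np.asarray(ys),
        bins=bins, range=[[0, width], [0, height]],
    )
    # Transpose so rows correspond to image rows; zero-count cells are
    # regions of the FOV with no coverage.
    return grid.T
```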
### Radial Error - δr vs. r

**Data Based In**: UV Coordinate Frame

The δr vs. r graph plots radial reprojection error as a function of radial distance from the principal point. This graph is an excellent way to characterize distortion error, particularly radial distortions.

| Expected | Poor Result |
| --- | --- |
| ![good radial distortion modeling](/assets/images/good_radial_distortion-8db02b56616ff4f5d8210fe748d9f047.png) | ![bad radial distortion modeling](/assets/images/bad_radial_distortion-e661240dc57b7a6bd32d90a88f12b30b.png) |

Consider the left graph above: this distribution represents a fully calibrated system that has modeled distortion using the Brown-Conrady model. The error is fairly evenly distributed and low, even as one moves away from the principal point of the image.

However, were MetriCal configured *not* to calibrate for a distortion model (e.g. a plex was generated with `metrical init` such that it used the `no_distortion` model), the output would look very different (see the right figure above). Radial error now fluctuates in a sinusoidal pattern, getting worse as we move away from the principal point. Clearly, this camera needs a distortion model of some kind in future calibrations.

### Tangential Error - δt vs. t

**Data Based In**: UV Coordinate Frame

Like the δr vs. r graph, δt vs. t plots the tangential reprojection error as a function of the tangential (angular) component of the data about the principal point. This can be a useful plot to determine if any unmodeled tangential (de-centering) distortion exists.

The chart below shows an adjustment with tangential distortion correctly calibrated and accounted for. The "Poor Result" shows the same adjustment without tangential distortion modeling applied.

| Expected | Poor Result |
| --- | --- |
| ![good tangential distortion modeling](/assets/images/good_tangential_distortion-1990528a484d5fd614e456caafcf1602.png) | ![bad tangential distortion modeling](/assets/images/bad_tangential_distortion-808aa3327327752810c754cb96e65d83.png) |

### Error in u and v

**Data Based In**: UV Coordinate Frame

These plot the error in our Cartesian axes (δu or δv) as a function of the distance along that axis (u or v). Both of these graphs should have their y-axes centered around zero, and should mostly look uniform in nature. The errors at the extreme edges may be larger or more sparse; however, the errors should not have any noticeable trend.

| Expected δu vs. u | Expected δv vs. v |
| --- | --- |
| ![good du vs u modeling](/assets/images/good_u_residual-23d800d7ce8783e293e2bb0c1d8610fb.png) | ![good dv vs v modeling](/assets/images/good_v_residual-c89b12b6562f990d2017e45531eb950d.png) |

## Unmodeled Intrinsics Indicators

There are certain trends and patterns to look out for in the plots above. Many of these can reveal unmodeled intrinsics within a system, like a distortion that wasn't taken into account in this calibration process. A few of these patterns are outlined below.
### Unmodeled Tangential Distortion

It was obvious that something was amiss when looking at the Poor Result plot for the δt vs. t graph, above. However, depending on the magnitude of error, we may suspect that any effects we see in such a graph are noise. If we then look at the δu vs. u and δv vs. v graphs, we might see the following trends as well:

| Unmodeled Tangential - δu vs. u | Unmodeled Tangential - δv vs. v |
| --- | --- |
| ![bad du vs u with tangential distortion](/assets/images/bad_u_residual-852255614513c4bcaf3dad4c9bd91a1e.png) | ![bad dv vs v with tangential distortion](/assets/images/bad_v_residual-19c40620826219907a0a04059043bce8.png) |

## Comparative Analysis

Beyond the above analyses, the charts provided are incredibly useful when comparing calibrations for the same component over time. Comparing these charts temporally is useful when designing a calibration process, and likewise can be useful in deciding between different models (e.g. Brown-Conrady vs. Kannala-Brandt distortion, etc.).

Comparing these charts across components can be helpful if the components are similar (e.g. similar cameras from the same manufacturer). There are some caveats; for example, one cannot compare these charts across two cameras if they have different pixel pitches. Pixel errors from a camera that has 10µm pixels cannot be directly compared to pixel errors from a camera that has 5µm pixels, as the former pixel is twice as large as the latter. One might understandably see that the former component has reprojection errors two times smaller than the latter, but this would be a false distinction; that difference is due to the difference in size between the pixels of both cameras and not due to some quality of the calibration.

## Pose Data

The `world_extrinsic` from the image reprojection data represents the pose (camera from object space) of the camera when a given image was taken. Each image can be uniquely identified by its component UUID and timestamp (present in the metadata), so these poses can be plotted or used to determine the position of the camera for any given image. This is one way to perform a hand-eye calibration with MetriCal, by extracting the world pose of each image given a timestamp or sequence number.

## Object XYZs

In addition to image reprojections, the final "unprojected" 3D coordinates of the object space features are also included in image reprojection metrics. This can be used in conjunction with other modality information to determine how an individual image contributed to errors in the final optimized object space. While MetriCal does not have any means to single out and filter an individual image, this can help during the development of a calibration process to ascertain if certain poses or geometries negatively contribute towards the calibration.
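One practical note when comparing numbers across cameras (see the Comparative Analysis section above): reprojection RMSEs can be put on a common metric footing by scaling with pixel pitch before comparison. A trivial sketch, with made-up pitch values for illustration:

```python
def rmse_in_micrometers(rmse_px, pixel_pitch_um):
    """Convert a reprojection RMSE from pixels to micrometers on the sensor."""
    return rmse_px * pixel_pitch_um

# Example: the same 1 µm of metric error shows up as different pixel RMSEs
cam_a = rmse_in_micrometers(0.10, 10.0)  # 10 µm pixels -> 1.0 µm
cam_b = rmse_in_micrometers(0.20, 5.0)   #  5 µm pixels -> 1.0 µm
```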
---

# IMU PreIntegration Error

Created by: IMU

## Overview

IMU preintegration error metrics contain all the relevant information to compute a preintegration cost based on a series of navigation states and IMU measurements.

In MetriCal, an IMU's *navigation states* represent key points in time where the IMU's position and velocity can be temporally related to another component, e.g. a camera. MetriCal uses these moments to define a *preintegration window* for the IMU, which in turn will produce a *local increment*, also known as a preintegrated measurement. See the [Analysis](#analysis) section for a more detailed explanation of preintegration.

![IMU Preintegration](/assets/images/imu_preintegration-770a17e7c3075dd60382ef0d76545c3f.png)

## Definition

IMU preintegration error metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `nav_component_id` | UUID | The UUID of the IMU component this metric was generated from. |
| `initial_gyro_bias` | An array of 3 float values | The XYZ components denoting the initial gyro bias (units of radians / second) |
| `initial_accelerometer_bias` | An array of 3 float values | The XYZ components denoting the initial accelerometer bias (units of meters / second²) |
| `start_navigation_states` | An array of extrinsics-velocity objects | The inferred starting navigation states. |
| `end_navigation_states` | An array of extrinsics-velocity objects | The inferred ending navigation states. |
| `local_increments` | An array of extrinsics-velocity objects | The local increments of the preintegration before bias correction. |
| `misalignments` | An array of extrinsics-velocity objects | The residual errors of the preintegration. The misalignment between the preintegration and the inferred preintegration based on the navigation states. |
| `preintegration_times` | An array of floats | The preintegration horizon. The change in time between the start and end navigation states. |
| `gyro_biases` | An array of arrays of 3 float values | An array of the inferred XYZ gyro biases of the preintegration. |
| `accelerometer_biases` | An array of arrays of 3 float values | An array of the inferred XYZ accelerometer biases of the preintegration. |

## Analysis

### Navigation States and Preintegration

In order to understand preintegration, it's helpful to understand what would happen without it. In a naive IMU integration, every new IMU measurement is integrated back into the same starting frame to produce a new navigation state. This is called *IMU mechanization*.

![Global increment](/assets/images/global_increment-3a2bf9f5441e828f62c36636b69c6c0c.png)

However, this approach has some serious problems. For one, any measurements out-of-sequence will make the computation more difficult, since every measurement state relies on the one before it. Moreover, IMU measurements will only help us *propagate* our navigation state estimate, not correct it. In such mechanized motion models, every subsequent navigation state becomes more uncertain if we don't have any auxiliary information to correct the state.

Instead, MetriCal uses *preintegration* to solve these problems. Preintegration is the process of reorganizing the state integration from a global frame into *local increments* between navigation states.
![Local increment](/assets/images/local_increment-e8caa304d196e83d7aa8a45f34c13ae9.png) This small reformulation allows us to address most of the problems created by global state propagation, and is more computationally efficient to boot. Because of this shift in thinking, local increments don't even need to account for the starting navigation state's velocity nor correct specific-force measurements to accelerations. The local increment is simply the integral of intrinsically corrected IMU measurements between the start and the end navigation states. MetriCal optimizes the calibration and start and end navigation states to align with this preintegrated local increment. IMU Preintegration Basics If you want to learn more about IMU preintegration, we wrote a whole series on IMUs on the Tangram Vision Blog! There's a lot to know; we don't get to preintegration until [Part 5](https://www.tangramvision.com/blog/imu-preintegration-basics-part-5-of-5#imu-preintegration). --- # Interior Points to Plane Error Created by: All Camera-LiDAR pairs (optional) ## Overview[​](#overview "Direct link to Overview") Interior Points to Plane error reflects the error in fit between the LiDAR points observed on the surface of a circular target, and the actual target. Unlike other metrics, this metric is not generated in every run. This metric is only produced if the `detect_interior_points` flag is set to `true` in the `circle` object space detector. ![Interior Points to Plane](/assets/images/interior_points_to_plane-f88def986c2b3462c762636235324509.png) ## Definition[​](#definition "Direct link to Definition") Interior Points to Plane Error metrics contain the following fields: | Field | Type | Description | | -------------------------------- | ------------------------------------ | --------------------------------------------------------------------------------------------------------------- | | `metadata` | A common metadata object | The metadata associated with the point cloud this circle target was measured in. | | `object_space_id` | UUID | The UUID of the object space that was being observed. | | `plane_inliers_x` | An array of floats | All X-coordinates of points contained within the circle target. | | `plane_inliers_y` | An array of floats | All Y-coordinates of points contained within the circle target. | | `plane_inliers_z` | An array of floats | All Z-coordinates of points contained within the circle target. | | `world_extrinsics_component_ids` | An array of UUIDs | The camera UUIDs for each world extrinsic in `world_extrinsics` | | `world_extrinsics` | An array of world extrinsics objects | The world pose (camera from object space) that correspond to each `circle_center_misalignment` | | `plane_inliers_distances` | An array of arrays of floats | The point-to-plane distances between the plane inlier points and the plane observed at a given world extrinsic. | | `plane_distance_rmse_per_we` | An array of floats | Similar to `plane_inliers_distances` but represents the RMSE of all plane inlier distances. | | `plane_distance_rmse` | Float | The plane inlier distance RMSE over all world extrinsics. | ## Analysis[​](#analysis "Direct link to Analysis") This metric can be considered a companion metric to [Circle Misalignment](/metrical/results/residual_metrics/circle_misalignment.md). 
While circle misalignment measures the error between the observed circle center in LiDAR space and the circle center estimated from each camera, interior points to plane error uses those same derived extrinsics to estimate the error in fit between the LiDAR points observed on the surface of the circle target and the actual target.

### Why Optional?

This metric is not generated in every run; it's opt-in. This is because poor quality LiDAR can often give erroneous and erratic distance readings, even on flat surfaces. Using these observations in a calibration would just make things worse, not better. By making this metric optional, we give the user the option to disregard these readings and just work with the circle center for calibration.

---

# Object Inertial Extrinsics Error

Created by: IMU

## Overview

These metrics refer to the error between a sequence of measured and optimized extrinsics involving the IMU. Put in mathematical terms:

$$\Gamma_O^E \approx \Gamma_{IMU}^E \cdot \Gamma_C^{IMU} \cdot \Gamma_O^C$$

* $E$ is the inertial frame, a gravity-aligned frame with its origin coincident with the first IMU navigation state
* $O$ is the object space
* $IMU$ is the current IMU navigation state
* $C$ is any component that can observe the object space directly, e.g. a camera

The transform $\Gamma_O^E$ is the *object inertial extrinsic*. This calculation is performed for every navigation state of the IMU, since a navigation state directly corresponds to a synced component's observations of the object space in question. Note that only one of these metrics is created per object space.

This metric is closely related to [Composed Relative Extrinsics](/metrical/results/residual_metrics/composed_relative_extrinsics.md); the strategy is basically the same. However, since an IMU cannot directly observe objects, we use the navigation states to infer the relative extrinsics.

![Object Inertial Extrinsics](/assets/images/oie_error-48ac5c97679eac82689c6338b0fe8943.png)

## Definition

Object inertial extrinsics error metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `nav_component_id` | UUID | The UUID of the IMU component this metric was generated from. |
| `navigation_states` | An array of extrinsics-velocity objects | The inferred navigation states of the object inertial extrinsic cost. |
| `world_extrinsics` | An array of world extrinsics objects | The inferred world pose (component from world) of the object inertial extrinsic, $\Gamma_O^C$ |
| `component_relative_extrinsics` | An array of extrinsics objects | The inferred component relative extrinsics, $\Gamma_C^{IMU}$ |
| `object_inertial_extrinsics` | An array of extrinsics objects | The inferred object inertial extrinsics, $\Gamma_O^{IMU}$ |
| `misalignments` | An array of extrinsics objects | The residual error of each object inertial extrinsic. |

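To make the relationship above concrete, here is a small numpy sketch that composes the right-hand side of the equation, assuming 4×4 homogeneous matrices. The `misalignment` helper shows one plausible way to express the residual as a pose; it is an illustration, not MetriCal's exact formulation:

```python
import numpy as np

def composed_object_inertial(gamma_e_from_imu, gamma_imu_from_c, gamma_c_from_o):
    """Compose Gamma_IMU^E . Gamma_C^IMU . Gamma_O^C (4x4 homogeneous matrices)."""
    return gamma_e_from_imu @ gamma_imu_from_c @ gamma_c_from_o

def misalignment(gamma_e_from_o_estimated, gamma_e_from_o_composed):
    """One plausible residual: the pose taking the optimized estimate onto the
    composed one. This should be close to identity when the two agree."""
    return np.linalg.inv(gamma_e_from_o_estimated) @ gamma_e_from_o_composed
```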
---

# Paired 3D Point Error

Created by: LiDAR-LiDAR pairs

## Overview

LiDAR points can't be directly compared to each other, but their alignment can be inferred. When MetriCal calibrates two lidars to one another, it uses the detected circle centers from each LiDAR frame of reference to optimize. The difference in these circle centers is the Paired 3D Point Error.

Note that the only LiDAR points used in this calculation are those detected on the retroreflective edge of the circle target; the interior points of the board are not used. There is a Paired 3D Point Error metric group for every pair of synced observations between LiDARs.

![Paired 3D Point Error](/assets/images/paired_3d_point-c132448357adbc0922fccd419fea7588.png)

## Description

Paired 3D point error metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `from` | UUID | The "from" component that these point misalignments were computed in reference to. |
| `to` | UUID | The "to" component that these point misalignments were computed in reference to. |
| `from_points` | An array of arrays of 3 floats | A collection of the XYZ coordinates of points in the "from" coordinate frame being matched against the `to_points`. |
| `to_points` | An array of arrays of 3 floats | A collection of the XYZ coordinates of points in the "to" coordinate frame being matched against the `from_points`. |
| `misalignments` | An array of arrays of 3 floats | The transformed distance misalignment (split up according to the Cartesian / XYZ axes) between the to and from points. |
| `rmse` | Float | The root-mean-square-error of all the misalignments. |

---

# Paired Plane Normal Error

Created by: LiDAR-LiDAR pairs

## Overview

LiDAR points can't be directly compared to each other, but their alignment can be inferred. When MetriCal calibrates two lidars to one another, it uses the detected plane normals of the circle target from each LiDAR frame of reference to optimize. The difference in these normals is the Paired Plane Normal Error.

Note that the only LiDAR points used in this calculation are those detected on the retroreflective edge of the circle target; the interior points of the board are not used. There is a Paired Plane Normal Error metric group for every pair of synced observations between LiDARs.

![Paired Plane Error](/assets/images/paired_plane-66816dbb6e898ef2caff8adb984f2769.png)

## Description

Paired plane normal error metrics contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `from` | UUID | The "from" component that these normal misalignments were computed in reference to. |
| `to` | UUID | The "to" component that these normal misalignments were computed in reference to. |
| `from_normals` | An array of arrays of 3 floats | A collection of the XYZ orientations of plane normals in the "from" coordinate frame being matched against the `to_normals`. |
| `to_normals` | An array of arrays of 3 floats | A collection of the XYZ orientations of plane normals in the "to" coordinate frame being matched against the `from_normals`. |
| `misalignments` | An array of arrays of 3 floats | The transformed chordal distance misalignment (split up according to the Cartesian / XYZ axes) between the to and from plane normals. |
| `rmse` | Float | The root-mean-square-error of all the misalignments.
| --- # Calibrate RealSense Sensors Get the Code The full code for this tutorial can be found in the [RealSense Cal Flasher repository](https://gitlab.com/tangram-vision/oss/realsense-cal-flasher) on GitLab. ## Objectives[​](#objectives "Direct link to Objectives") * Record proper data from all RealSense sensors at the same time * Calibrate several RealSense sensors at once using MetriCal * Flash calibrations to each sensor *** One of the most popular applications of MetriCal is calibrating multiple Intel RealSense sensors at once. RealSense is one of the most popular 3D sensing technologies, and it's common to see multiple RealSense deployed on a system (even if they aren't all being used!). As with any sensor system, the calibration on a RealSense can drift over time, whether due to wear and tear, installation changes, or environmental factors. MetriCal can help you keep your RealSense sensors calibrated and accurate with minimal effort. This tutorial will focus on the D4xx series. ## Recording Data[​](#recording-data "Direct link to Recording Data") MetriCal is meant for convenience, so we recommend recording all sensors at the same time, in the same dataset. There is no need for multiple recordings! Every RealSense device streams rectified data unless set to a specific frame format and resolution. In other words, most data will have already applied the current calibration to the frame! We need to avoid this in our own setup. Here are the settings you'll want to apply for a calibration dataset: | Device Model | IR Stream Format | Stream Resolution | | ---------------- | ---------------- | ----------------- | | D400, D410, D415 | Y16 | 1920x1080 | | All other models | Y16 | 1280x800 | | Device Model | Color Stream Format | Stream Resolution | | ----------------- | ------------------- | ----------------- | | D415, D435, D435i | YUY2 | 1920x1080 | | D455 | YUY2 | 1280x800 | Easy Recording with RealSense-Rust Tangram Vision is also the maintainer of the [RealSense-Rust crate](https://gitlab.com/tangram-vision/oss/realsense-rust), which makes it easy to run and record RealSense data from the safety of Rust. The `record_bag` example in the repository is already configured with the above settings. Use that to quickly record a calibration ROSbag for MetriCal. ### Avoiding Motion Blur[​](#avoiding-motion-blur "Direct link to Avoiding Motion Blur") Whether or not a camera is global shutter (as many are on the RealSense line), motion blur can still affect image quality. This is the last thing you want when calibrating a camera, as it can lead to inaccurate results regardless of whether you're moving the sensor rig or the targets. To avoid motion blur, make sure to stand still every second or so during calibration. MetriCal will filter out frames in motion automatically when processing the data. For more on this and other tips for calibrating cameras, see the [Single Camera Calibration Guide](/metrical/calibration_guides/single_camera_cal.md). ### Identifying the Sensors[​](#identifying-the-sensors "Direct link to Identifying the Sensors") Make sure you've recorded the sensor data in a way that makes each stream identifiable. If you're recording in a ROSbag or MCAP format, it's enough to prefix each device topic with its serial or name, like `sensor1/color/image_raw` and `sensor2/color/image_raw`. If recording to a folder of images, keep in mind that MetriCal will expect all *component* folders to be in a single directory. 
This means that if you have two RealSense sensors, you'll need to record the data from both to the same folder: ``` observations/ |- sensor1-color/ |- 0001.png |- 0002.png |- ... |- sensor1-ir-left/ |- 0001.png |- 0002.png |- ... |- sensor1-ir-right/ |- 0001.png |- 0002.png |- ... |- sensor2-color/ |- 0001.png |- 0002.png |- ... ... ``` See the [input data formats documentation](/metrical/configuration/data_formats.md) for more. ### The Right Target[​](#the-right-target "Direct link to The Right Target") We at Tangram Vision always recommend using a [markerboard](/metrical/targets/target_overview.md) for camera calibration. In fact, you can use multiple boards, which would ensure good coverage and depth of field! See our documentation on [using multiple targets](/metrical/targets/multiple_targets.md) for more. ## Running MetriCal[​](#running-metrical "Direct link to Running MetriCal") Once you have the data all ready, running MetriCal is as easy as running any other dataset. For the sake of example, we'll run it from the CLI instead of the manifest, but feel free to use whichever method you prefer. ``` metrical init -m *:opencv-radtan $DATA $INIT_PLEX metrical calibrate -o realsense_cal.json $DATA $INIT_PLEX $OBJ # Get our calibrated plex from the results using the mcap CLI tool mcap get attachment --name optimized-plex results.mcap > realsense_cal_plex.json ``` Notice that we're setting all cameras (all topics: `*`) to use the `opencv-radtan` model. This is the model that Intel uses for all cameras on the RealSense line. And that's about it! This whole process can take from 10 seconds to 3 minutes, depending on how much data you used. You should now have a set of calibrations for all components, which we'll then flash to the sensors in the next step. ## Flashing the Calibration to the Sensors[​](#flashing-the-calibration-to-the-sensors "Direct link to Flashing the Calibration to the Sensors") Now here's the nifty part: We can script this entire process! The results of MetriCal can be flashed directly to the RealSense sensors using an OSS tool created by Tangram Vision called [RealSense Cal Flasher](https://gitlab.com/tangram-vision/oss/realsense-cal-flasher). If you haven't, clone the repository and run through the "Setup" instructions. ### Sensor Config[​](#sensor-config "Direct link to Sensor Config") Once you've set things up, it's time to create your sensor config. This is another JSON we use especially for the RealSense Cal Flasher. It maps topics from our plex to the correct RealSense sensor using the serial number. Here's an example: ``` { "sensor_serial_one": { "left": "name_of_the_left_component_in_plex", "right": "name_of_the_right_component_in_plex", }, "sensor_serial_two": { ... } ... } ``` Fill that bad boy in, then run the flasher: ``` ./rs-cal-flasher realsense_cal_plex.json .json ``` All of your RealSense sensors should now be calibrated and ready to go! ## Summary[​](#summary "Direct link to Summary") 1. Take a calibration dataset 2. Run the following script: ``` metrical init -m *:opencv-radtan $DATA $INIT_PLEX metrical calibrate -o realsense_cal.json $DATA $INIT_PLEX $OBJ mcap get attachment --name optimized-plex results.mcap > realsense_cal_plex.json ./rs-cal-flasher realsense_cal_plex.json .json ``` Boom. Done. 
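If you're calibrating a whole fleet of rigs, the same four commands are easy to wrap in a script. Here's a rough Python sketch using `subprocess`; the dataset, plex, and sensor config paths are placeholders, and it simply mirrors the CLI invocations shown above:

```python
import subprocess

def calibrate_realsense_rig(dataset, init_plex, obj_space, sensor_config):
    """Run the init -> calibrate -> extract -> flash loop from the summary above.

    All arguments are placeholder file paths; sensor_config is the sensor
    config JSON described in the previous section.
    """
    subprocess.run(
        ["metrical", "init", "-m", "*:opencv-radtan", dataset, init_plex],
        check=True,
    )
    subprocess.run(
        ["metrical", "calibrate", "-o", "realsense_cal.json", dataset, init_plex, obj_space],
        check=True,
    )
    # Pull the optimized plex out of the results MCAP
    with open("realsense_cal_plex.json", "wb") as plex_file:
        subprocess.run(
            ["mcap", "get", "attachment", "--name", "optimized-plex", "results.mcap"],
            stdout=plex_file,
            check=True,
        )
    subprocess.run(
        ["./rs-cal-flasher", "realsense_cal_plex.json", sensor_config],
        check=True,
    )
```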
## Links and Resources

* Intel RealSense D400 Series Custom Calibration Whitepaper:
* The Dynamic Calibration Programmer documentation for RealSense:

---

# Migrate from Kalibr to MetriCal

One of the most common questions we get is how to migrate from Kalibr to MetriCal. This tutorial will guide you through the process of migration, the main differences in operation, and what you should expect.

## Why Migrate?

We'll freely admit: Kalibr makes a lot of sense when first getting started with camera calibration. It's accurate, documented, and has a large user base. On top of that, it's open source, which is a huge draw for those who like to dive into the math.

However, it is **not** designed for scalable production. It is an academic tool for small-scale projects. MetriCal is more robust, scalable, and maintainable, with a whole company devoted to continuously improving its user experience. Some improvements you can expect:

* Drastically improved time to calibration (>10x improvement on large datasets)
* Use multiple targets, not just one
* Equal or greater accuracy in calibration results
* Easily digestible metrics across every component
* More modalities supported: Cameras, IMU, LiDAR, Local Navigation Systems
* More intrinsics models
* Complete reporting of parameter covariances
* Enhanced visualization tools

...among others. The good news is that much of your data collection process can remain the same! MCAPs are fully supported (as well as [ROSbags converted to MCAPs](/metrical/configuration/data_formats.md)), and MetriCal can use [Kalibr-style AprilGrids](/metrical/targets/camera_targets.md) as targets. Whatever you're doing for data collection, just keep doing it!

Once you're ready to scale your systems, we're confident that you'll find MetriCal to be the right tool for the job.

## Operational Differences

### Camchains vs. Plex

The most significant difference between Kalibr and MetriCal is the way they handle extrinsics. Kalibr uses a "camchain" structure, where each component is connected to the next in a chain. In order to derive the extrinsics between two cameras, you must traverse the chain from one camera to the other.

MetriCal uses the [plex](/metrical/core_concepts/plex_overview.md) structure, where each component is connected to every other component in a fully connected graph, along with a measurement of uncertainty along each connection. This means that there are actually several ways to derive the spatial constraints between two components, depending on the path you take through the graph.

Spatial Constraint Traversal

You can read more about how extrinsics are treated in the Plex by perusing the [Spatial Constraint Traversal](/metrical/core_concepts/constraints.md#spatial-constraints) documentation.

In practice, it's best to use the [Shape](/metrical/commands/calibration/shape/shape_overview.md) command to derive the "best" (i.e. most precise) path between two components. If you would like to program your own pathfinding algorithm, you can use our reference implementation [found here](https://gitlab.com/tangram-vision/oss/spatial_constraint_traversal).

### Motion Filtering

Artifacts from excessive motion can ruin a calibration.
### Motion Filtering

Artifacts from excessive motion can ruin a calibration. To prevent this fate, MetriCal has a built-in motion filter that automatically identifies and removes frames with excessive motion from your dataset. This is especially useful for large datasets, where most of the data can be either redundant or noisy.

**Motion Filtering**

Read more about the motion filter in the documentation for [Calibrate](/metrical/commands/calibration/calibrate.md#the-motion-filter) mode.

Those used to Kalibr will be familiar with the `bag-freq` parameter, which removes data in a more heuristic way by taking every Nth observation. MetriCal uses a fundamentally different philosophy: with the right data capture process, there should never be a reason to throw away data without cause. If you do find that you need to downsample your data, or you would like to use all of your data during a calibration, we recommend modifying the [camera motion threshold](/metrical/commands/calibration/calibrate.md#camera-motion-threshold).

## Migrations

### Camera Model Conversion

Here's a handy conversion chart from Kalibr camera models to MetriCal camera models:

| Kalibr Model     | Corresponding MetriCal Model                                               |
| ---------------- | -------------------------------------------------------------------------- |
| `pinhole`        | [`no-distortion`](/metrical/calibration_models/cameras.md#no-distortion)   |
| `pinhole-radtan` | [`opencv-radtan`](/metrical/calibration_models/cameras.md#opencv-radtan)   |
| `pinhole-equi`   | [`opencv-fisheye`](/metrical/calibration_models/cameras.md#opencv-fisheye) |
| `omni`           | [`omni`](/metrical/calibration_models/cameras.md#omnidirectional-omni)     |
| `ds`             | [`double-sphere`](/metrical/calibration_models/cameras.md#double-sphere)   |
| `eucm`           | [`eucm`](/metrical/calibration_models/cameras.md#eucm)                     |

### Describing Your Target/Object Space

The conversion between Kalibr and MetriCal is straightforward. The only difference is that MetriCal can support more than one target at a time, which means MetriCal assigns a UUID to each target. Here's an example of how to convert a Kalibr target YAML to a MetriCal object space JSON (if you have many targets to migrate, see the conversion sketch at the end of this guide):

#### Kalibr Target YAML

```
target_type: "aprilgrid"
tagCols: 5
tagRows: 6
tagSize: 0.088
tagSpacing: 0.3
```

#### MetriCal Object Space JSON

```
{
  "object_spaces": {
    "24e6df7b-b756-4b9c-a719-660d45d796bf": {
      "descriptor": {
        "variances": [1e-6, 1e-6, 1e-6] // Recommended default values
      },
      "detector": {
        "april_grid": {
          "marker_dictionary": "ApriltagKalibr",
          "marker_grid_height": 6,
          "marker_grid_width": 5,
          "marker_length": 0.088,
          "tag_spacing": 0.3
        }
      }
    }
  }
}
```

### Script Migration

If you're used to running Kalibr from the command line, you'll find that MetriCal is quite similar; it's just more verbose. Here's an example of migrating a calibration script from Kalibr to MetriCal:

#### Kalibr Command

```
kalibr_calibrate_cameras \
    --bag data.bag \
    --topics /cam_one /cam_two \
    --models pinhole-radtan eucm \
    --target target.yaml
```

#### MetriCal Manifest

```
... # Project metadata

[variables.object-space]
description = "Path to the input object space JSON file."
value = "my_aprilgrid_target.json"

[stages.init]
command = "init"
dataset = "data.bag"
topic-to-model = [["/cam_one", "opencv-radtan"], ["/cam_two", "eucm"]]
... # ...other options...
initialized-plex = "{{auto}}"

[stages.calibrate]
command = "calibrate"
dataset = "data.bag"
input-plex = "{{init.initialized-plex}}"
input-object-space = "{{variables.object-space}}"
... # ...other options...
results = "{{auto}}"

# Derive the Minimum Spanning Tree of the Plex
[stages.shape]
command = "shape-mst"
input-plex = "{{calibrate.results}}"
shaped-plex = "{{auto}}"
```
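If you have a stack of Kalibr target YAMLs to migrate, the target conversion shown earlier is easy to script. Below is a minimal sketch that simply mirrors the field mapping from the example above; it assumes PyYAML is installed, and the generated file should still be sanity-checked against your MetriCal version.

```python
# Minimal sketch: convert a Kalibr AprilGrid target YAML into a MetriCal
# object space JSON, mirroring the field mapping shown in the example above.
# Assumes PyYAML (pip install pyyaml).
import json
import uuid

import yaml


def kalibr_target_to_object_space(kalibr_yaml_path: str, output_json_path: str) -> None:
    with open(kalibr_yaml_path) as f:
        target = yaml.safe_load(f)

    if target.get("target_type") != "aprilgrid":
        raise ValueError("This sketch only handles Kalibr 'aprilgrid' targets")

    object_space = {
        "object_spaces": {
            # MetriCal keys each object space by a UUID; generate a fresh one.
            str(uuid.uuid4()): {
                "descriptor": {
                    # Recommended default values, per the example above.
                    "variances": [1e-6, 1e-6, 1e-6],
                },
                "detector": {
                    "april_grid": {
                        "marker_dictionary": "ApriltagKalibr",
                        "marker_grid_height": target["tagRows"],
                        "marker_grid_width": target["tagCols"],
                        "marker_length": target["tagSize"],
                        "tag_spacing": target["tagSpacing"],
                    }
                },
            }
        }
    }

    with open(output_json_path, "w") as f:
        json.dump(object_space, f, indent=2)


if __name__ == "__main__":
    kalibr_target_to_object_space("target.yaml", "my_aprilgrid_target.json")
```

The output can be passed directly as the object space input to `metrical calibrate`, or referenced from a manifest as in the example above.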
value = "my_aprilgrid_target.json" [stages.init] command = "init" dataset = "data.bag" topic-to-model = [["/cam_one", "opencv-radtan"], ["/cam_two", "eucm"]] ... # ...other options... initialized-plex = "{{auto}}" [stages.calibrate] command = "calibrate" dataset = "data.bag" input-plex = "{{init.initialized-plex}}" input-object-space = "{{variables.object-space}}" ... # ...other options... results = "{{auto}}" # Derive the Minimum Spanning Tree of the Plex [stages.shape] command = "shape-mst" input-plex = "{{calibrate.results}}" shaped-plex = "{{auto}}" ``` --- # Billing ## Creating a Subscription[​](#creating-a-subscription "Direct link to Creating a Subscription") To create a license for MetriCal, you must have an active (paid) subscription. To create a subscription, navigate to the Billing page in the Tangram Vision Hub. Choose the plan that you want to subscribe to, and click the “Subscribe” button. You will be sent to a Stripe billing page to enter payment information. ## Cancelling a Subscription[​](#cancelling-a-subscription "Direct link to Cancelling a Subscription") To cancel a subscription, navigate to the “Billing” page in the Hub and click “Manage Subscription” to enter the Stripe portal. Click “Cancel plan”. ## Adding, Changing, & Deleting Forms of Payment[​](#adding-changing--deleting-forms-of-payment "Direct link to Adding, Changing, & Deleting Forms of Payment") To access the Stripe interface and make a change to a payment method, navigate to the “Billing” page in the Hub. Click on the “Add Payment Method” button to add a credit card. If you have a paid subscription, manage your payment details via the “Manage Subscription” button. ## Adding & Updating Billing Contact[​](#adding--updating-billing-contact "Direct link to Adding & Updating Billing Contact") Each Tangram Vision account must have an up-to-date billing contact email. The default email address used will be the email address used when you created your Tangram Vision account. Should you wish to change the billing contact associated with your account, navigate to the Billing page and find the “Billing Contact” section. In the email field, enter the updated email address and click “Save Changes.” ![Screenshot of form for updating billing email address](/assets/images/hub_update_billing_contact-f35bf70527a46cb37fb02190fa235a7a.png) --- # Contact Us ## User Support[​](#user-support "Direct link to User Support") Your experience with MetriCal is important to us! If you have any questions, comments, or feedback, please reach out to us at . We appreciate your input and are here to help you get the most out of MetriCal. ## Partnering, White Labeling, and Licensing Inquiries[​](#partnering-white-labeling-and-licensing-inquiries "Direct link to Partnering, White Labeling, and Licensing Inquiries") MetriCal is a commercial product intended for enterprise-level autonomy at scale. We at Tangram Vision pride ourselves on providing the best possible calibration experience for our customers, and we're happy to work with you to find a licensing solution that fits your needs. If you are interested in white labeling or licensing MetriCal for your own use, please reach out to us at ## How to Cite MetriCal[​](#how-to-cite-metrical "Direct link to How to Cite MetriCal") First off, we're thrilled to help you further your research in perception! Please cite MetriCal to acknowledge its contribution to your work. This can be done by including a reference to MetriCal in the software or methods section of your paper. 
Suggested citation format:

```
@software{MetriCal,
  title = {MetriCal: Multimodal Calibration for Robotics and Automation at Scale},
  author = {{Tangram Vision Development Team}},
  url = {https://www.tangramvision.com},
  version = {insert version number},
  date = {insert date of usage},
  year = {2025},
  publisher = {{Tangram Robotics, Inc.}},
  address = {Online},
}
```

Please replace "insert version number" with the version of MetriCal you used and "insert date of usage" with the date(s) you used the tool in your research. This citation format helps ensure that MetriCal's development team receives appropriate credit for their work and facilitates the tool's discovery by other researchers.

---

# Legal Information

## Disclaimers

Tangram Vision, Tangram Vision Platform, Plex, and the Tangram Vision logo are trademarks ™ of Tangram Robotics, Inc.

The following Tangram Vision technologies and Platform components are protected under U.S. Patent No. EP4384360A2 (and/or other jurisdictions as applicable): Plex, components, constraints, and calibration processes.

The [Narrow Field-of-View Camera Calibration Method](/metrical/calibration_guides/narrow_fov_cal.md) outlined in the linked MetriCal Calibration Guide is patent pending.

All content herein © 2021-2025 Tangram Robotics, Inc. All rights reserved.

## Confidentiality

The information contained in the Tangram Vision Documentation is privileged and only for the information of the intended recipient, and may not be modified, published, or redistributed without the prior written consent of Tangram Robotics, Inc.

## 3rd Party Licenses

The Tangram Vision Ecosystem of software is built in part thanks to the following Free and Open Source projects. We have provided links to each project as well as the original license text below.
### Apache 2.0 License[​](#apache-20-license "Direct link to Apache 2.0 License") 🔍 3rd party code licensed with Apache 2.0 * [ab\_glyph](//github.com/alexheretic/ab-glyph) * [ab\_glyph\_rasterizer](//github.com/alexheretic/ab-glyph) * [accesskit](//github.com/AccessKit/accesskit) * [accesskit\_consumer](//github.com/AccessKit/accesskit) * [accesskit\_unix](//github.com/AccessKit/accesskit) * [accesskit\_winit](//github.com/AccessKit/accesskit) * [addr2line](//github.com/gimli-rs/addr2line) * [adler](//github.com/jonas-schievink/adler/blob/master/LICENSE-APACHE) * [aead](//github.com/RustCrypto/traits) * [aes-gcm](//github.com/RustCrypto/AEADs) * [aes-soft](//github.com/RustCrypto/block-ciphers) * [aes](//github.com/RustCrypto/block-ciphers) * [ahash](//github.com/tkaitchuck/ahash) * [allocator-api2](//github.com/zakarumych/allocator-api2) * [anstream](//github.com/rust-cli/anstyle.git) * [anstyle-parse](//github.com/rust-cli/anstyle.git) * [anstyle-query](//github.com/rust-cli/anstyle) * [anstyle](//github.com/rust-cli/anstyle.git) * [anyhow](//github.com/dtolnay/anyhow) * [approx](//github.com/brendanzab/approx/blob/master/LICENSE) * [arboard](//github.com/1Password/arboard) * [arc-swap](//github.com/vorner/arc-swap) * [argmin-math](//github.com/argmin-rs/argmin) * [argmin](//github.com/argmin-rs/argmin) * [array-init-cursor](//github.com/planus-org/planus) * [array-init](//github.com/Manishearth/array-init/) * [arrayvec](//github.com/bluss/arrayvec) * [arrow-array](//github.com/apache/arrow-rs) * [arrow-buffer](//github.com/apache/arrow-rs) * [arrow-data](//github.com/apache/arrow-rs) * [arrow-format](//github.com/DataEngineeringLabs/arrow-format) * [arrow-schema](//github.com/apache/arrow-rs) * [as-raw-xcb-connection](//github.com/psychon/as-raw-xcb-connection) * [ascii](//github.com/tomprogrammer/rust-ascii) * [ash](//github.com/MaikKlein/ash) * [asn1-rs](//github.com/rusticata/asn1-rs.git) * [asn1-rs-derive](//github.com/rusticata/asn1-rs.git) * [asn1-rs-impl](//github.com/rusticata/asn1-rs.git) * [async-attributes](//github.com/async-rs/async-attributes) * [async-broadcast](//github.com/smol-rs/async-broadcast) * [async-channel](//github.com/smol-rs/async-channel) * [async-executor](//github.com/smol-rs/async-executor) * [async-fs](//github.com/smol-rs/async-fs) * [async-global-executor](//github.com/Keruspe/async-global-executor) * [async-io](//github.com/smol-rs/async-io) * [async-lock](//github.com/smol-rs/async-lock) * [async-net](//github.com/smol-rs/async-net) * [async-once-cell](//github.com/danieldg/async-once-cell) * [async-process](//github.com/smol-rs/async-process) * [async-signal](//github.com/smol-rs/async-signal) * [async-std](//github.com/async-rs/async-std) * [async-task](//github.com/smol-rs/async-task) * [async-trait](//github.com/dtolnay/async-trait) * [atomic-waker](//github.com/smol-rs/atomic-waker) * [atspi-common](//github.com/odilia-app/atspi) * [atspi-connection](//github.com/odilia-app/atspi/) * [atspi-proxies](//github.com/odilia-app/atspi) * [atspi](//github.com/odilia-app/atspi) * [az](//gitlab.com/tspiteri/az) * [backtrace-ext](//github.com/gankra/backtrace-ext) * [backtrace](//github.com/rust-lang/backtrace-rs) * [base16ct](//github.com/RustCrypto/formats/tree/master/base16ct) * [base64](//github.com/marshallpierce/rust-base64) * [base64ct](//github.com/RustCrypto/formats/tree/master/base64ct) * [bimap](//github.com/billyrieger/bimap-rs/) * [bit-set](//github.com/contain-rs/bit-set) * [bit-vec](//github.com/contain-rs/bit-vec) * 
[bit\_field](//github.com/phil-opp/rust-bit-field) * [bitflags](//github.com/bitflags/bitflags) * [bitstream-io](//github.com/tuffy/bitstream-io) * [block-buffer](//github.com/RustCrypto/utils) * [blocking](//github.com/smol-rs/blocking) * [bstr](//github.com/BurntSushi/bstr) * [bumpalo](//github.com/fitzgen/bumpalo) * [bytemuck](//github.com/Lokathor/bytemuck) * [bytemuck\_derive](//github.com/Lokathor/bytemuck) * [cacache](//github.com/zkat/cacache-rs) * [cdr](//github.com/hrektts/cdr-rs) * [cfg-if](//github.com/alexcrichton/cfg-if) * [chrono](//github.com/chronotope/chrono) * [chunked\_transfer](//github.com/frewsxcv/rust-chunked-transfer) * [cipher](//github.com/RustCrypto/traits) * [clap-verbosity-flag](//github.com/clap-rs/clap-verbosity-flag) * [clap](//github.com/clap-rs/clap) * [clap\_builder](//github.com/clap-rs/clap) * [clap\_complete](//github.com/clap-rs/clap/tree/master/clap_complete) * [clap\_derive](//github.com/clap-rs/clap/tree/master/clap_derive) * [clap\_lex](//github.com/clap-rs/clap/tree/master/clap_lex) * [clean-path](//gitlab.com/foo-jin/clean-path) * [codespan-reporting](//github.com/brendanzab/codespan) * [colorchoice](//github.com/rust-cli/anstyle) * [colorgrad](//github.com/mazznoer/colorgrad-rs) * [concurrent-queue](//github.com/smol-rs/concurrent-queue) * [const-oid](//github.com/RustCrypto/formats/tree/master/const-oid) * [const-random-macro](//github.com/tkaitchuck/constrandom) * [const-random](//github.com/tkaitchuck/constrandom) * [const\_fn](//github.com/taiki-e/const_fn) * [const\_soft\_float](//github.com/823984418/const_soft_float) * [constgebra](//github.com/knickish/constgebra) * [cookie](//github.com/SergioBenitez/cookie-rs) * [cpufeatures](//github.com/RustCrypto/utils) * [cpuid-bool](//github.com/RustCrypto/utils) * [crc32fast](//github.com/srijs/rust-crc32fast) * [crossbeam-channel](//github.com/crossbeam-rs/crossbeam) * [crossbeam-deque](//github.com/crossbeam-rs/crossbeam) * [crossbeam-epoch](//github.com/crossbeam-rs/crossbeam) * [crossbeam-queue](//github.com/crossbeam-rs/crossbeam) * [crossbeam-utils](//github.com/crossbeam-rs/crossbeam) * [crossbeam](//github.com/crossbeam-rs/crossbeam) * [crypto-bigint](//github.com/RustCrypto/crypto-bigint) * [crypto-common](//github.com/RustCrypto/traits) * [crypto-mac](//github.com/RustCrypto/traits) * [csscolorparser](//github.com/mazznoer/csscolorparser-rs) * [ctr](//github.com/RustCrypto/stream-ciphers) * [cursor-icon](//github.com/rust-windowing/cursor-icon) * [custom\_derive](//github.com/DanielKeep/rust-custom-derive/tree/custom_derive-master) * [data-url](//github.com/servo/rust-url) * [der](//github.com/RustCrypto/formats/tree/master/der) * [der-parser](//github.com/rusticata/der-parser.git) * [deranged](//github.com/jhpratt/deranged) * [digest](//github.com/RustCrypto/traits) * [directories](//github.com/soc/directories-rs) * [dirs](//github.com/soc/dirs-rs) * [dirs-next](//github.com/xdg-rs/dirs) * [dirs-sys](//github.com/dirs-dev/dirs-sys-rs) * [dns-lookup](//github.com/keeperofdakeys/dns-lookup/) * [document-features](//github.com/slint-ui/document-features) * [downcast-rs](//github.com/marcianx/downcast-rs) * [drawille](//github.com/ftxqxd/drawille-rs) * [dyn-clone](//github.com/dtolnay/dyn-clone) * [ecdsa](//github.com/RustCrypto/signatures/tree/master/ecdsa) * [ecolor](//github.com/emilk/egui) * [ed25519](//github.com/RustCrypto/signatures/tree/master/ed25519) * [eframe](//github.com/emilk/egui/tree/master/crates/eframe) * [egui-wgpu](//github.com/emilk/egui/tree/master/crates/egui-wgpu) 
* [egui-winit](//github.com/emilk/egui/tree/master/crates/egui-winit) * [egui](//github.com/emilk/egui) * [egui\_commonmark](//github.com/lampsitter/egui_commonmark) * [egui\_commonmark\_backend](//github.com/lampsitter/egui_commonmark) * [egui\_extras](//github.com/emilk/egui) * [egui\_glow](//github.com/emilk/egui/tree/master/crates/egui_glow) * [egui\_plot](//github.com/emilk/egui) * [egui\_tiles](//github.com/rerun-io/egui_tiles) * [ehttp](//github.com/emilk/ehttp) * [either](//github.com/bluss/either) * [elliptic-curve](//github.com/RustCrypto/traits/tree/master/elliptic-curve) * [emath](//github.com/emilk/egui/tree/master/crates/emath) * [encoding\_rs](//github.com/hsivonen/encoding_rs) * [enum-as-inner](//github.com/bluejekyll/enum-as-inner) * [enum-map-derive](//codeberg.org/xfix/enum-map) * [enum-map](//codeberg.org/xfix/enum-map) * [enum\_dispatch](//gitlab.com/antonok/enum_dispatch) * [enumflags2](//github.com/meithecatte/enumflags2) * [enumflags2\_derive](//github.com/meithecatte/enumflags2) * [enumn](//github.com/dtolnay/enumn) * [enumset](//github.com/Lymia/enumset) * [enumset\_derive](//github.com/Lymia/enumset) * [env\_logger](//github.com/rust-cli/env_logger) * [equivalent](//github.com/cuviper/equivalent) * [errno](//github.com/lambda-fairy/rust-errno) * [error-chain](//github.com/rust-lang-nursery/error-chain) * [ethnum](//github.com/nlordell/ethnum-rs) * [event-listener-strategy](//github.com/smol-rs/event-listener) * [event-listener](//github.com/smol-rs/event-listener) * [ewebsock](//github.com/rerun-io/ewebsock) * [fastrand](//github.com/smol-rs/fastrand) * [fdeflate](//github.com/image-rs/fdeflate) * [ff](//github.com/zkcrypto/ff) * [filetime](//github.com/alexcrichton/filetime) * [fixed](//gitlab.com/tspiteri/fixed) * [fixedbitset](//github.com/petgraph/fixedbitset) * [flate2](//github.com/rust-lang/flate2-rs) * [flume](//github.com/zesterer/flume) * [fnv](//github.com/servo/rust-fnv) * [fs-err](//github.com/andrewhickman/fs-err) * [form\_urlencoded](//github.com/servo/rust-url) * [futures-channel](//github.com/rust-lang/futures-rs) * [futures-core](//github.com/rust-lang/futures-rs) * [futures-executor](//github.com/rust-lang/futures-rs) * [futures-io](//github.com/rust-lang/futures-rs) * [futures-lite](//github.com/smol-rs/futures-lite) * [futures-macro](//github.com/rust-lang/futures-rs) * [futures-sink](//github.com/rust-lang/futures-rs) * [futures-task](//github.com/rust-lang/futures-rs) * [futures-timer](//github.com/rust-lang/futures-rs) * [futures-util](//github.com/rust-lang/futures-rs) * [futures](//github.com/rust-lang/futures-rs) * [geo-types](//github.com/georust/geo) * [gethostname](//github.com/lunaryorn/gethostname.rs) * [getrandom](//github.com/rust-random/getrandom) * [ghash](//github.com/RustCrypto/universal-hashes) * [gif](//github.com/image-rs/image-gif) * [gimli](//github.com/gimli-rs/gimli) * [glam](//github.com/bitshifter/glam-rs) * [glow](//github.com/grovesNL/glow) * [gltf-derive](//github.com/gltf-rs/gltf) * [gltf-json](//github.com/gltf-rs/gltf) * [gltf](//github.com/gltf-rs/gltf) * [gpu-alloc-types](//github.com/zakarumych/gpu-alloc) * [gpu-alloc](//github.com/zakarumych/gpu-alloc) * [gpu-descriptor-types](//github.com/zakarumych/gpu-descriptor) * [gpu-descriptor](//github.com/zakarumych/gpu-descriptor) * [group](//github.com/zkcrypto/group) * [half](//github.com/starkat99/half-rs) * [hash\_hasher](//github.com/Fraser999/Hash-Hasher.git) * [hash32](//github.com/japaric/hash32) * [hashbrown](//github.com/rust-lang/hashbrown) * 
[heapless](//github.com/rust-embedded/heapless) * [heck](//github.com/withoutboats/heck) * [hex](//github.com/KokaKiwi/rust-hex) * [hexasphere](//github.com/OptimisticPeach/hexasphere.git) * [hkdf](//github.com/RustCrypto/KDFs/) * [hmac](//github.com/RustCrypto/MACs) * [home](//github.com/rust-lang/cargo) * [http-cache-reqwest](//github.com/06chaynes/http-cache) * [http-cache](//github.com/06chaynes/http-cache) * [http-client](//github.com/http-rs/http-client) * [http-serde](//gitlab.com/kornelski/http-serde) * [http-types](//github.com/http-rs/http-types) * [http](//github.com/hyperium/http) * [httparse](//github.com/seanmonstar/httparse) * [httpdate](//github.com/pyfisch/httpdate) * [humantime](//github.com/tailhook/humantime) * [hyper-rustls](//github.com/rustls/hyper-rustls) * [iana-time-zone](//github.com/strawlab/iana-time-zone) * [ident\_case](//github.com/TedDriggs/ident_case) * [idna](//github.com/servo/rust-url/) * [indexmap](//github.com/bluss/indexmap) * [inout](//github.com/RustCrypto/utils) * [io-lifetimes](//github.com/sunfishcode/io-lifetimes) * [ipnet](//github.com/krisprice/ipnet) * [ipnetwork](//github.com/achanda/ipnetwork) * [itertools](//github.com/rust-itertools/itertools) * [itoa](//github.com/dtolnay/itoa) * [jpeg-decoder](//github.com/image-rs/jpeg-decoder) * [js-sys](//github.com/rustwasm/wasm-bindgen/tree/master/crates/js-sys) * [keccak](//github.com/RustCrypto/sponges/tree/master/keccak) * [kdtree](//github.com/mrhooray/kdtree-rs) * [khronos-egl](//github.com/timothee-haudebourg/khronos-egl) * [kurbo](//github.com/linebender/kurbo) * [kv-log-macro](//github.com/yoshuawuyts/kv-log-macro) * [lazy\_static](//github.com/rust-lang-nursery/lazy-static.rs) * [libc](//github.com/rust-lang/libc) * [libm](//github.com/rust-lang/libm) * [libnghttp2-sys](//github.com/alexcrichton/nghttp2-rs) * [libtbb](//github.com/oneapi-src/oneTBB) * [libz-sys](//github.com/rust-lang/libz-sys) * [linfa-clustering](//github.com/rust-ml/linfa/) * [linfa-linalg](//github.com/rust-ml/linfa-linalg) * [linfa-nn](//github.com/rust-ml/linfa/) * [linfa](//github.com/rust-ml/linfa) * [linked-hash-map](//github.com/contain-rs/linked-hash-map) * [litrs](//github.com/LukasKalbertodt/litrs/) * [lock\_api](//github.com/Amanieu/parking_lot) * [log-once](//github.com/Luthaf/log-once) * [log](//github.com/rust-lang/log) * [loole](//github.com/mahdi-shojaee/loole) * [maplit](//github.com/bluss/maplit) * [matrixmultiply](//github.com/bluss/matrixmultiply/) * [md5](//github.com/stainless-steel/md5) * [memmap2](//github.com/RazrFalcon/memmap2-rs) * [memory-stats](//github.com/Arc-blroth/memory-stats) * [mime](//github.com/hyperium/mime) * [minimal-lexical](//github.com/Alexhuszagh/minimal-lexical) * [miniz\_oxide](//github.com/Frommi/miniz_oxide/tree/master/miniz_oxide) * [naga](//github.com/gfx-rs/wgpu/tree/trunk/naga) * [nalgebra-macros](//github.com/dimforge/nalgebra) * [nalgebra-sparse](//github.com/dimforge/nalgebra) * [nalgebra](//github.com/dimforge/nalgebra) * [ndarray-rand](//github.com/rust-ndarray/ndarray) * [ndarray-stats](//github.com/rust-ndarray/ndarray-stats) * [ndarray](//github.com/rust-ndarray/ndarray) * [nohash-hasher](//github.com/paritytech/nohash-hasher) * [noisy\_float](//github.com/SergiusIW/noisy_float-rs) * [num-bigint](//github.com/rust-num/num-bigint) * [num-bigint-dig](//github.com/dignifiedquire/num-bigint) * [num-complex](//github.com/rust-num/num-complex) * [num-derive](//github.com/rust-num/num-derive) * [num-integer](//github.com/rust-num/num-integer) * 
[num-iter](//github.com/rust-num/num-iter) * [num-rational](//github.com/rust-num/num-rational) * [num-traits](//github.com/rust-num/num-traits) * [num](//github.com/rust-num/num) * [num\_cpus](//github.com/seanmonstar/num_cpus) * [num\_threads](//github.com/jhpratt/num_threads) * [object](//github.com/gimli-rs/object) * [oid-registry](//github.com/rusticata/oid-registry.git) * [once\_cell](//github.com/matklad/once_cell) * [opaque-debug](//github.com/RustCrypto/utils) * [opencv](//github.com/opencv/opencv) * [openssl-probe](//github.com/alexcrichton/openssl-probe) * [order-stat](//github.com/huonw/order-stat) * [ordered-stream](//github.com/danieldg/ordered-stream) * [owned\_ttf\_parser](//github.com/alexheretic/owned-ttf-parser) * [p256](//github.com/RustCrypto/elliptic-curves/tree/master/p256) * [p384](//github.com/RustCrypto/elliptic-curves/tree/master/p384) * [parking](//github.com/smol-rs/parking) * [parking\_lot](//github.com/Amanieu/parking_lot) * [parking\_lot\_core](//github.com/Amanieu/parking_lot) * [paste](//github.com/dtolnay/paste) * [pem-rfc7468](//github.com/RustCrypto/formats/tree/master/pem-rfc7468) * [percent-encoding](//github.com/servo/rust-url/) * [petgraph](//github.com/petgraph/petgraph) * [pin-project-internal](//github.com/taiki-e/pin-project) * [pin-project-lite](//github.com/taiki-e/pin-project-lite) * [pin-project](//github.com/taiki-e/pin-project) * [pin-utils](//github.com/rust-lang-nursery/pin-utils) * [piper](//github.com/notgull/piper) * [pkcs1](//github.com/RustCrypto/formats/tree/master/pkcs1) * [pkcs8](//github.com/RustCrypto/formats/tree/master/pkcs8) * [planus](//github.com/planus-org/planus) * [png](//github.com/image-rs/image-png) * [pnet\_base](//github.com/libpnet/libpnet) * [pnet\_datalink](//github.com/libpnet/libpnet) * [pnet\_sys](//github.com/libpnet/libpnet) * [poll-promise](//github.com/EmbarkStudios/poll-promise) * [polling](//github.com/smol-rs/polling) * [pollster](//github.com/zesterer/pollster) * [polyval](//github.com/RustCrypto/universal-hashes) * [portable-atomic](//github.com/taiki-e/portable-atomic) * [powerfmt](//github.com/jhpratt/powerfmt) * [ppv-lite86](//github.com/cryptocorrosion/cryptocorrosion) * [primal-check](//github.com/huonw/primal) * [primeorder](//github.com/RustCrypto/elliptic-curves/tree/master/primeorder) * [proc-macro-crate](//github.com/bkchr/proc-macro-crate) * [proc-macro-hack](//github.com/dtolnay/proc-macro-hack) * [proc-macro2](//github.com/alexcrichton/proc-macro2) * [profiling-procmacros](//github.com/aclysma/profiling) * [profiling](//github.com/aclysma/profiling) * [proptest-derive](//github.com/AltSysrq/proptest) * [proptest](//github.com/proptest-rs/proptest) * [puffin](//github.com/EmbarkStudios/puffin) * [puffin\_http](//github.com/EmbarkStudios/puffin) * [qoi](//github.com/aldanor/qoi-rust) * [quick-error](//github.com/tailhook/quick-error) * [quinn](//github.com/quinn-rs/quinn) * [quinn-proto](//github.com/quinn-rs/quinn) * [quinn-udp](//github.com/quinn-rs/quinn) * [quote](//github.com/dtolnay/quote) * [rand](//github.com/rust-random/rand) * [rand\_chacha](//github.com/rust-random/rand) * [rand\_core](//github.com/rust-random/rand) * [rand\_distr](//github.com/rust-random/rand_distr) * [rand\_xorshift](//github.com/rust-random/rngs) * [rand\_xoshiro](//github.com/rust-random/rngs) * [raw-window-handle](//github.com/rust-windowing/raw-window-handle) * [rawpointer](//github.com/bluss/rawpointer/) * [rayon-core](//github.com/rayon-rs/rayon) * [rayon](//github.com/rayon-rs/rayon) * 
[re\_analytics](//github.com/rerun-io/rerun) * [re\_blueprint\_tree](//github.com/rerun-io/rerun) * [re\_build\_info](//github.com/rerun-io/rerun) * [re\_case](//github.com/rerun-io/rerun) * [re\_chunk](//github.com/rerun-io/rerun) * [re\_chunk\_store](//github.com/rerun-io/rerun) * [re\_component\_ui](//github.com/rerun-io/rerun) * [re\_context\_menu](//github.com/rerun-io/rerun) * [re\_crash\_handler](//github.com/rerun-io/rerun) * [re\_data\_loader](//github.com/rerun-io/rerun) * [re\_data\_source](//github.com/rerun-io/rerun) * [re\_data\_ui](//github.com/rerun-io/rerun) * [re\_entity\_db](//github.com/rerun-io/rerun) * [re\_error](//github.com/rerun-io/rerun) * [re\_format](//github.com/rerun-io/rerun) * [re\_format\_arrow](//github.com/rerun-io/rerun) * [re\_int\_histogram](//github.com/rerun-io/rerun) * [re\_log](//github.com/rerun-io/rerun) * [re\_log\_encoding](//github.com/rerun-io/rerun) * [re\_log\_types](//github.com/rerun-io/rerun) * [re\_math](//github.com/rerun-io/rerun) * [re\_memory](//github.com/rerun-io/rerun) * [re\_query](//github.com/rerun-io/rerun) * [re\_renderer](//github.com/rerun-io/rerun) * [re\_sdk](//github.com/rerun-io/rerun) * [re\_sdk\_comms](//github.com/rerun-io/rerun) * [re\_selection\_panel](//github.com/rerun-io/rerun) * [re\_smart\_channel](//github.com/rerun-io/rerun) * [re\_space\_view](//github.com/rerun-io/rerun) * [re\_space\_view\_bar\_chart](//github.com/rerun-io/rerun) * [re\_space\_view\_dataframe](//github.com/rerun-io/rerun) * [re\_space\_view\_spatial](//github.com/rerun-io/rerun) * [re\_space\_view\_tensor](//github.com/rerun-io/rerun) * [re\_space\_view\_text\_document](//github.com/rerun-io/rerun) * [re\_space\_view\_text\_log](//github.com/rerun-io/rerun) * [re\_space\_view\_time\_series](//github.com/rerun-io/rerun) * [re\_string\_interner](//github.com/rerun-io/rerun) * [re\_time\_panel](//github.com/rerun-io/rerun) * [re\_tracing](//github.com/rerun-io/rerun) * [re\_tuid](//github.com/rerun-io/rerun) * [re\_types](//github.com/rerun-io/rerun) * [re\_types\_blueprint](//github.com/rerun-io/rerun) * [re\_types\_core](//github.com/rerun-io/rerun) * [re\_viewer](//github.com/rerun-io/rerun) * [re\_viewer\_context](//github.com/rerun-io/rerun) * [re\_viewport\_blueprint](//github.com/rerun-io/rerun) * [re\_web\_viewer\_server](//github.com/rerun-io/rerun) * [re\_ws\_comms](//github.com/rerun-io/rerun) * [realsense-rust](//gitlab.com/tangram-vision-oss/realsense-rust) * [realsense-sys](//gitlab.com/tangram-vision-oss/realsense-rust) * [ref-cast](//github.com/dtolnay/ref-cast) * [ref-cast-impl](//github.com/dtolnay/ref-cast) * [reflink-copy](//github.com/cargo-bins/reflink-copy) * [regex-automata](//github.com/rust-lang/regex/tree/master/regex-automata) * [regex-syntax](//github.com/rust-lang/regex/tree/master/regex-syntax) * [regex](//github.com/rust-lang/regex) * [renderdoc-sys](//github.com/ebkalderon/renderdoc-rs) * [reqwest-middleware](//github.com/TrueLayer/reqwest-middleware) * [reqwest](//github.com/seanmonstar/reqwest) * [rerun](//github.com/rerun-io/rerun) * [ring-compat](//github.com/RustCrypto/ring-compat) * [ron](//github.com/ron-rs/ron) * [roxmltree](//github.com/RazrFalcon/roxmltree) * [rsa](//github.com/RustCrypto/RSA) * [rstar](//github.com/georust/rstar) * [rustc-demangle](//github.com/alexcrichton/rustc-demangle) * [rustc-hash](//github.com/rust-lang/rustc-hash) * [rusticata-macros](//github.com/rusticata/rusticata-macros.git) * [rustfft](//github.com/ejmahler/RustFFT) * 
[rustls-native-certs](//github.com/rustls/rustls-native-certs) * [rustls-pemfile](//github.com/rustls/pemfile) * [rustls-platform-verifier](//github.com/rustls/rustls-platform-verifier) * [rustls](//github.com/rustls/rustls) * [rustversion](//github.com/dtolnay/rustversion) * [rusty-fork](//github.com/altsysrq/rusty-fork) * [ryu](//github.com/dtolnay/ryu) * [safe\_arch](//github.com/Lokathor/safe_arch) * [scoped-tls](//github.com/alexcrichton/scoped-tls) * [scopeguard](//github.com/bluss/scopeguard) * [sct](//github.com/rustls/sct.rs) * [sec1](//github.com/RustCrypto/formats/tree/master/sec1) * [secrecy](//github.com/iqlusioninc/crates/tree/main/secrecy) * [seq-macro](//github.com/dtolnay/seq-macro) * [serde-big-array](https://github.com/est31/serde-big-array) * [serde](//github.com/serde-rs/serde) * [serde\_bytes](//github.com/serde-rs/bytes) * [serde\_derive](//github.com/serde-rs/serde) * [serde\_derive\_internals](//github.com/serde-rs/serde) * [serde\_json](//github.com/serde-rs/json) * [serde\_qs](//github.com/samscott89/serde_qs) * [serde\_repr](//github.com/dtolnay/serde-repr) * [serde\_spanned](//github.com/toml-rs/toml) * [serde\_urlencoded](//github.com/nox/serde_urlencoded) * [serde\_with](//github.com/jonasbb/serde_with/) * [serde\_with\_macros](//github.com/jonasbb/serde_with/) * [serde\_yaml](//github.com/dtolnay/serde-yaml) * [sha-1](//github.com/RustCrypto/hashes) * [sha1](//github.com/RustCrypto/hashes) * [sha2](//github.com/RustCrypto/hashes) * [sha3](//github.com/RustCrypto/hashes) * [shellexpand](//gitlab.com/ijackson/rust-shellexpand) * [signal-hook-registry](//github.com/vorner/signal-hook) * [signature](//github.com/RustCrypto/traits/tree/master/signature) * [simba](//github.com/dimforge/simba) * [simdutf8](//github.com/rusticstuff/simdutf8) * [similar-asserts](//github.com/mitsuhiko/similar-asserts) * [similar](//github.com/mitsuhiko/similar) * [simplecss](//github.com/linebender/simplecss) * [simple-error](//github.com/WiSaGaN/simple-error.git) * [siphasher](//github.com/jedisct1/rust-siphash) * [smallvec](//github.com/servo/rust-smallvec) * [smol\_str](//github.com/rust-analyzer/smol_str) * [socket2](//github.com/rust-lang/socket2) * [spinning\_top](//github.com/rust-osdev/spinning_top) * [spirv](//github.com/gfx-rs/rspirv) * [spki](//github.com/RustCrypto/formats/tree/master/spki) * [sprs](//github.com/sparsemat/sprs) * [strip-ansi-escapes](//github.com/luser/strip-ansi-escapes) * [ssri](//github.com/zkat/ssri-rs) * [stable\_deref\_trait](//github.com/storyyeller/stable_deref_trait) * [standback](//github.com/jhpratt/standback) * [static\_assertions](//github.com/nvzqz/static-assertions-rs) * [strength\_reduce](//github.com/ejmahler/strength_reduce) * [sublime\_fuzzy](//github.com/Schlechtwetterfront/fuzzy-rs) * [surf](//github.com/http-rs/surf) * [svgtypes](//github.com/RazrFalcon/svgtypes) * [syn](//github.com/dtolnay/syn) * [sync\_wrapper](//github.com/Actyx/sync_wrapper) * [task-local-extensions](//github.com/TrueLayer/task-local-extensions) * [tempfile](//github.com/Stebalien/tempfile) * [terminal\_size](//github.com/eminence/terminal-size) * [thiserror-impl](//github.com/dtolnay/thiserror) * [thiserror](//github.com/dtolnay/thiserror) * [time-core](//github.com/time-rs/time) * [time-macros-impl](//github.com/time-rs/time) * [time-macros](//github.com/time-rs/time) * [time](//github.com/time-rs/time) * [tiny-http](//github.com/tiny-http/tiny-http) * [tinyvec](//github.com/Lokathor/tinyvec) * [tinyvec\_macros](//github.com/Soveu/tinyvec_macros) * 
[tls-listener](//github.com/tmccombs/tls-listener) * [tokio-rustls](//github.com/rustls/tokio-rustls) * [toml](//github.com/toml-rs/toml) * [toml\_datetime](//github.com/toml-rs/toml) * [toml\_edit](//github.com/toml-rs/toml) * [toml\_parser](//github.com/toml-rs/toml) * [toml\_writer](//github.com/toml-rs/toml) * [transpose](//github.com/ejmahler/transpose) * [ttf-parser](//github.com/RazrFalcon/ttf-parser) * [tungstenite](//github.com/snapview/tungstenite-rs) * [type-map](//github.com/kardeiz/type-map) * [typenum](//github.com/paholg/typenum) * [unarray](//github.com/cameron1024/unarray) * [unicase](//github.com/seanmonstar/unicase) * [unicode-bidi](//github.com/servo/unicode-bidi) * [unicode-normalization](//github.com/unicode-rs/unicode-normalization) * [unicode-segmentation](//github.com/unicode-rs/unicode-segmentation) * [unicode-width](//github.com/unicode-rs/unicode-width) * [unicode-xid](//github.com/unicode-rs/unicode-xid) * [uhlc](//github.com/atolab/uhlc-rs) * [unindent](//github.com/dtolnay/indoc) * [universal-hash](//github.com/RustCrypto/traits) * [unzip-n](//github.com/mexus/unzip-n) * [urdf-rs](//github.com/openrr/urdf-rs) * [ureq](//github.com/algesten/ureq) * [url](//github.com/servo/rust-url) * [utf-8](//github.com/SimonSapin/rust-utf8) * [utf8parse](//github.com/alacritty/vte) * [uuid](//github.com/uuid-rs/uuid) * [value-bag](//github.com/sval-rs/value-bag) * [vec1](//github.com/rustonaut/vec1/) * [vec\_map](//github.com/contain-rs/vec-map) * [vte](//github.com/alacritty/vte) * [wait-timeout](//github.com/alexcrichton/wait-timeout) * [waker-fn](//github.com/smol-rs/waker-fn) * [wasm-bindgen-backend](//github.com/rustwasm/wasm-bindgen/tree/master/crates/backend) * [wasm-bindgen-futures](//github.com/rustwasm/wasm-bindgen/tree/master/crates/futures) * [wasm-bindgen-macro-support](//github.com/rustwasm/wasm-bindgen/tree/master/crates/macro-support) * [wasm-bindgen-macro](//github.com/rustwasm/wasm-bindgen/tree/master/crates/macro) * [wasm-bindgen-shared](//github.com/rustwasm/wasm-bindgen/tree/master/crates/shared) * [wasm-bindgen](//github.com/rustwasm/wasm-bindgen) * [web-sys](//github.com/rustwasm/wasm-bindgen/tree/master/crates/web-sys) * [web-time](//github.com/daxpedda/web-time) * [webbrowser](//github.com/amodm/webbrowser-rs) * [weezl](//github.com/image-rs/lzw) * [wgpu-core](//github.com/gfx-rs/wgpu) * [wgpu-hal](//github.com/gfx-rs/wgpu) * [wgpu-types](//github.com/gfx-rs/wgpu) * [wgpu](//github.com/gfx-rs/wgpu) * [wide](//github.com/Lokathor/wide) * [winit](//github.com/rust-windowing/winit) * [x11rb-protocol](//github.com/psychon/x11rb) * [x11rb](//github.com/psychon/x11rb) * [x509-parser](//github.com/rusticata/x509-parser.git) * [xkeysym](//github.com/notgull/xkeysym) * [zenoh](//github.com/eclipse-zenoh/zenoh) * [zenoh-buffers](//github.com/eclipse-zenoh/zenoh) * [zenoh-codec](//github.com/eclipse-zenoh/zenoh) * [zenoh-collections](//github.com/eclipse-zenoh/zenoh) * [zenoh-config](//github.com/eclipse-zenoh/zenoh) * [zenoh-core](//github.com/eclipse-zenoh/zenoh) * [zenoh-crypto](//github.com/eclipse-zenoh/zenoh) * [zenoh-keyexpr](//github.com/eclipse-zenoh/zenoh) * [zenoh-link](//github.com/eclipse-zenoh/zenoh) * [zenoh-link-commons](//github.com/eclipse-zenoh/zenoh) * [zenoh-link-quic](//github.com/eclipse-zenoh/zenoh) * [zenoh-link-quic\_datagram](//github.com/eclipse-zenoh/zenoh) * [zenoh-link-tcp](//github.com/eclipse-zenoh/zenoh) * [zenoh-link-tls](//github.com/eclipse-zenoh/zenoh) * [zenoh-link-udp](//github.com/eclipse-zenoh/zenoh) * 
[zenoh-link-unixsock\_stream](//github.com/eclipse-zenoh/zenoh) * [zenoh-link-ws](//github.com/eclipse-zenoh/zenoh) * [zenoh-macros](//github.com/eclipse-zenoh/zenoh) * [zenoh-plugin-trait](//github.com/eclipse-zenoh/zenoh) * [zenoh-protocol](//github.com/eclipse-zenoh/zenoh) * [zenoh-result](//github.com/eclipse-zenoh/zenoh) * [zenoh-runtime](//github.com/eclipse-zenoh/zenoh) * [zenoh-sync](//github.com/eclipse-zenoh/zenoh) * [zenoh-task](//github.com/eclipse-zenoh/zenoh) * [zenoh-transport](//github.com/eclipse-zenoh/zenoh) * [zenoh-util](//github.com/eclipse-zenoh/zenoh) * [zerocopy](//github.com/google/zerocopy) * [zeroize](//github.com/RustCrypto/utils/tree/master/zeroize) * [zstd-sys](//github.com/gyscos/zstd-rs) * [zune-core](//github.com/etemesi254/zune-image) * [zune-inflate](//github.com/etemesi254/zune-image) * [zune-jpeg](//github.com/etemesi254/zune-image/tree/dev/zune-jpeg) 📋 Original Apache 2.0 License ``` Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) 
The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright {yyyy} {name of copyright owner} Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` ### Apache-2.0 and OFL-1.1 / UFL-1.0[​](#apache-20-and-ofl-11--ufl-10 "Direct link to Apache-2.0 and OFL-1.1 / UFL-1.0") 🔍 3rd party code licensed as Apache 2.0 and packaging fonts via the OFL / UFL * [epaint](//github.com/emilk/egui/tree/master/crates/epaint) * [re\_ui](//github.com/rerun-io/rerun) 📋 Original OFL 1.1 License Text ``` PREAMBLE The goals of the Open Font License (OFL) are to stimulate worldwide development of collaborative font projects, to support the font creation efforts of academic and linguistic communities, and to provide a free and open framework in which fonts may be shared and improved in partnership with others. The OFL allows the licensed fonts to be used, studied, modified and redistributed freely as long as they are not sold by themselves. The fonts, including any derivative works, can be bundled, embedded, redistributed and/or sold with any software provided that any reserved names are not used by derivative works. The fonts and derivatives, however, cannot be released under any other type of license. The requirement for fonts to remain under this license does not apply to any document created using the fonts or their derivatives. DEFINITIONS “Font Software” refers to the set of files released by the Copyright Holder(s) under this license and clearly marked as such. This may include source files, build scripts and documentation. “Reserved Font Name” refers to any names specified as such after the copyright statement(s). “Original Version” refers to the collection of Font Software components as distributed by the Copyright Holder(s). “Modified Version” refers to any derivative made by adding to, deleting, or substituting – in part or in whole – any of the components of the Original Version, by changing formats or by porting the Font Software to a new environment. “Author” refers to any designer, engineer, programmer, technical writer or other person who contributed to the Font Software. PERMISSION & CONDITIONS Permission is hereby granted, free of charge, to any person obtaining a copy of the Font Software, to use, study, copy, merge, embed, modify, redistribute, and sell modified and unmodified copies of the Font Software, subject to the following conditions: 1) Neither the Font Software nor any of its individual components, in Original or Modified Versions, may be sold by itself. 2) Original or Modified Versions of the Font Software may be bundled, redistributed and/or sold with any software, provided that each copy contains the above copyright notice and this license. 
These can be included either as stand-alone text files, human-readable headers or in the appropriate machine-readable metadata fields within text or binary files as long as those fields can be easily viewed by the user. 3) No Modified Version of the Font Software may use the Reserved Font Name(s) unless explicit written permission is granted by the corresponding Copyright Holder. This restriction only applies to the primary font name as presented to the users. 4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font Software shall not be used to promote, endorse or advertise any Modified Version, except to acknowledge the contribution(s) of the Copyright Holder(s) and the Author(s) or with their explicit written permission. 5) The Font Software, modified or unmodified, in part or in whole, must be distributed entirely under this license, and must not be distributed under any other license. The requirement for fonts to remain under this license does not apply to any document created using the Font Software. TERMINATION This license becomes null and void if any of the above conditions are not met. DISCLAIMER THE FONT SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE. ``` 📋 Original UFL 1.1 License Text ``` UBUNTU FONT LICENCE Version 1.0 PREAMBLE This licence allows the licensed fonts to be used, studied, modified and redistributed freely. The fonts, including any derivative works, can be bundled, embedded, and redistributed provided the terms of this licence are met. The fonts and derivatives, however, cannot be released under any other licence. The requirement for fonts to remain under this licence does not require any document created using the fonts or their derivatives to be published under this licence, as long as the primary purpose of the document is not to be a vehicle for the distribution of the fonts. DEFINITIONS "Font Software" refers to the set of files released by the Copyright Holder(s) under this licence and clearly marked as such. This may include source files, build scripts and documentation. "Original Version" refers to the collection of Font Software components as received under this licence. "Modified Version" refers to any derivative made by adding to, deleting, or substituting -- in part or in whole -- any of the components of the Original Version, by changing formats or by porting the Font Software to a new environment. "Copyright Holder(s)" refers to all individuals and companies who have a copyright ownership of the Font Software. "Substantially Changed" refers to Modified Versions which can be easily identified as dissimilar to the Font Software by users of the Font Software comparing the Original Version with the Modified Version. To "Propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. 
Propagation includes copying, distribution (with or without modification and with or without charging a redistribution fee), making available to the public, and in some countries other activities as well. PERMISSION & CONDITIONS This licence does not grant any rights under trademark law and all such rights are reserved. Permission is hereby granted, free of charge, to any person obtaining a copy of the Font Software, to propagate the Font Software, subject to the below conditions: 1) Each copy of the Font Software must contain the above copyright notice and this licence. These can be included either as stand-alone text files, human-readable headers or in the appropriate machine- readable metadata fields within text or binary files as long as those fields can be easily viewed by the user. 2) The font name complies with the following: (a) The Original Version must retain its name, unmodified. (b) Modified Versions which are Substantially Changed must be renamed to avoid use of the name of the Original Version or similar names entirely. (c) Modified Versions which are not Substantially Changed must be renamed to both (i) retain the name of the Original Version and (ii) add additional naming elements to distinguish the Modified Version from the Original Version. The name of such Modified Versions must be the name of the Original Version, with "derivative X" where X represents the name of the new work, appended to that name. 3) The name(s) of the Copyright Holder(s) and any contributor to the Font Software shall not be used to promote, endorse or advertise any Modified Version, except (i) as required by this licence, (ii) to acknowledge the contribution(s) of the Copyright Holder(s) or (iii) with their explicit written permission. 4) The Font Software, modified or unmodified, in part or in whole, must be distributed entirely under this licence, and must not be distributed under any other licence. The requirement for fonts to remain under this licence does not affect any document created using the Font Software, except any version of the Font Software extracted from a document created using the Font Software may only be distributed under this licence. TERMINATION This licence becomes null and void if any of the above conditions are not met. DISCLAIMER THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE. ``` ### Apache-2.0 and Unicode-DFS-2016[​](#apache-20-and-unicode-dfs-2016 "Direct link to Apache-2.0 and Unicode-DFS-2016") 🔍 3rd party code licensed as Apache 2.0 and Unicode DFS 2016 * [unicode-ident](//github.com/dtolnay/unicode-ident) 📋 Original Unicode DFS 2016 License ``` UNICODE LICENSE V3 COPYRIGHT AND PERMISSION NOTICE Copyright © 1991-2024 Unicode, Inc. NOTICE TO USER: Carefully read the following legal agreement. BY DOWNLOADING, INSTALLING, COPYING OR OTHERWISE USING DATA FILES, AND/OR SOFTWARE, YOU UNEQUIVOCALLY ACCEPT, AND AGREE TO BE BOUND BY, ALL OF THE TERMS AND CONDITIONS OF THIS AGREEMENT. 
IF YOU DO NOT AGREE, DO NOT DOWNLOAD, INSTALL, COPY, DISTRIBUTE OR USE THE DATA FILES OR SOFTWARE. Permission is hereby granted, free of charge, to any person obtaining a copy of data files and any associated documentation (the "Data Files") or software and any associated documentation (the "Software") to deal in the Data Files or Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, and/or sell copies of the Data Files or Software, and to permit persons to whom the Data Files or Software are furnished to do so, provided that either (a) this copyright and permission notice appear with all copies of the Data Files or Software, or (b) this copyright and permission notice appear in associated Documentation. THE DATA FILES AND SOFTWARE ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS NOTICE BE LIABLE FOR ANY CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THE DATA FILES OR SOFTWARE. Except as contained in this notice, the name of a copyright holder shall not be used in advertising or otherwise to promote the sale, use or other dealings in these Data Files or Software without prior written authorization of the copyright holder. ``` ### MIT License[​](#mit-license "Direct link to MIT License") 🔍 3rd party code licensed with MIT * [aho-corasick](//github.com/BurntSushi/aho-corasick) * [aligned-vec](//github.com/sarah-ek/aligned-vec) * [ansi-str](//github.com/zhiburt/ansi-str) * [ansi-to-html](//github.com/Aloso/to-html) * [ansitok](//github.com/zhiburt/ansitok) * [arg\_enum\_proc\_macro](//github.com/lu-zero/arg_enum_proc_macro) * [ashpd](//github.com/bilelmoussaoui/ashpd) * [atty](//github.com/softprops/atty) * [bincode](//github.com/servo/bincode) * [binrw](//github.com/jam1garner/binrw) * [binrw\_derive](//github.com/jam1garner/binrw) * [byteorder](//github.com/BurntSushi/byteorder) * [bytes](//github.com/tokio-rs/bytes) * [calloop-wayland-source](//github.com/smithay/calloop-wayland-source) * [calloop](//github.com/Smithay/calloop) * [cfb](//github.com/mdsteele/rust-cfb) * [color\_quant](//github.com/image-rs/color_quant.git) * [comfy-table](//github.com/nukesor/comfy-table) * [console](//github.com/console-rs/console) * [conv](//github.com/DanielKeep/rust-conv) * [convert\_case](//github.com/rutrum/convert-case) * [crossterm](//github.com/crossterm-rs/crossterm) * [crunchy](//github.com/eira-fransham/crunchy) * [darling](//github.com/TedDriggs/darling) * [darling\_core](//github.com/TedDriggs/darling) * [darling\_macro](//github.com/TedDriggs/darling) * [derive-getters](//git.sr.ht/~kvsari/derive-getters) * [derive\_more](//github.com/JelteF/derive_more) * [dlib](//github.com/elinorbgr/dlib) * [envy](//github.com/softprops/envy) * [faer-entity](//github.com/sarah-ek/faer-rs/) * [faer-macros](//github.com/sarah-quinones/faer-rs/) * [faer-traits](//github.com/sarah-quinones/faer-rs/) * [faer](//github.com/sarah-ek/faer-rs/) * [ffmpeg-sidecar](//github.com/nathanbabcock/ffmpeg-sidecar) * [float-cmp](//github.com/mikedilger/float-cmp) * 
[foreign\_vec](//github.com/DataEngineeringLabs/foreign_vec) * [generic-array](//github.com/fizyk20/generic-array.git) * [h2](//github.com/hyperium/h2) * [http-body](//github.com/hyperium/http-body) * [hyper](//github.com/hyperium/hyper) * [image-webp](//github.com/image-rs/image-webp) * [image](//github.com/image-rs/image) * [imagesize](//github.com/Roughsketch/imagesize) * [imgref](//github.com/kornelski/imgref) * [indicatif](//github.com/console-rs/indicatif) * [infer](//github.com/bojand/infer) * [inflections](//docs.rs/inflections) * [is-terminal](//github.com/sunfishcode/is-terminal) * [isahc](//github.com/sagebind/isahc) * [libmimalloc-sys](//github.com/purpleprotocol/mimalloc_rust/tree/master/libmimalloc-sys) * [loop9](//gitlab.com/kornelski/loop9.git) * [lru](//github.com/jeromefroe/lru-rs) * [lz4-sys](//github.com/10xGenomics/lz4-rs) * [lz4](//github.com/10xGenomics/lz4-rs) * [lz4\_flex](//github.com/pseitz/lz4_flex) * [maybe\_rayon](//github.com/shssoichiro/maybe-rayon) * [mcap](//github.com/foxglove/mcap) * [memchr](//github.com/BurntSushi/memchr) * [memoffset](//github.com/Gilnaa/memoffset) * [miette-derive](//github.com/zkat/miette) * [miette](//github.com/zkat/miette) * [mimalloc](//github.com/purpleprotocol/mimalloc_rust) * [mime\_guess](//github.com/abonander/mime_guess) * [mime\_guess2](//github.com/ttys3/mime_guess2) * [mio](//github.com/tokio-rs/mio) * [natord](//github.com/lifthrasiir/rust-natord) * [new\_debug\_unreachable](//github.com/mbrubeck/rust-debug-unreachable) * [nix](//github.com/nix-rust/nix) * [no-std-net](//github.com/dunmatt/no-std-net) * [nom](//github.com/Geal/nom) * [nonempty-collections](//github.com/fosskers/nonempty-collections) * [noop\_proc\_macro](//github.com/lu-zero/noop_proc_macro) * [number\_prefix](//github.com/ogham/rust-number-prefix) * [opencv](//github.com/twistedfall/opencv-rust) * [openssl-sys](//github.com/sfackler/rust-openssl) * [ordered-float](//github.com/reem/rust-ordered-float) * [owo-colors](//github.com/jam1garner/owo-colors) * [pcd-rs-derive](//github.com/jerry73204/pcd-rs) * [pcd-rs](//github.com/jerry73204/pcd-rs) * [peg-macros](//github.com/kevinmehall/rust-peg) * [peg-runtime](//github.com/kevinmehall/rust-peg) * [peg](//github.com/kevinmehall/rust-peg) * [phf](//github.com/rust-phf/rust-phf) * [phf\_generator](//github.com/rust-phf/rust-phf) * [phf\_macros](//github.com/rust-phf/rust-phf) * [phf\_shared](//github.com/rust-phf/rust-phf) * [pico-args](//github.com/RazrFalcon/pico-args) * [platform-dirs](//github.com/cjbassi/platform-dirs-rs) * [ply-rs](//github.com/Fluci/ply-rs.git) * [private-gemm-x86](//github.com/sarah-quinones/gemm-x64-v2/) * [pulldown-cmark](//github.com/raphlinus/pulldown-cmark) * [qd](//github.com/sarah-quinones/qd/) * [quick-xml](//github.com/tafia/quick-xml) * [rctree](//github.com/RazrFalcon/rctree) * [regex-automata](//github.com/BurntSushi/regex-automata) * [resvg](//github.com/RazrFalcon/resvg) * [rfd](//github.com/PolyMeilex/rfd) * [rgb](//github.com/kornelski/rust-rgb) * [rmp-serde](//github.com/3Hren/msgpack-rust) * [rmp](//github.com/3Hren/msgpack-rust) * [roslibrust](//github.com/roslibrust/roslibrust) * [roslibrust\_codegen](//github.com/Carter12s/roslibrust) * [roslibrust\_common](//github.com/roslibrust/roslibrust) * [roslibrust\_serde\_rosmsg](//github.com/adnanademovic/serde_rosmsg) * [same-file](//github.com/BurntSushi/same-file) * [schemars](//github.com/GREsau/schemars) * [schemars\_derive](//github.com/GREsau/schemars) * 
[serde-wasm-bindgen](//github.com/RReverser/serde-wasm-bindgen) * [simd-adler32](//github.com/mcountryman/simd-adler32) * [simd\_helpers](//github.com/lu-zero/simd_helpers) * [slab](//github.com/tokio-rs/slab) * [sluice](//github.com/sagebind/sluice) * [smart-default](//github.com/idanarye/rust-smart-default) * [smawk](//github.com/mgeisler/smawk) * [smithay-client-toolkit](//github.com/smithay/client-toolkit) * [smithay-clipboard](//github.com/smithay/smithay-clipboard) * [space](//github.com/rust-cv/space) * [spin](//github.com/mvdnes/spin-rs.git) * [spindle](//github.com/sarah-quinones/spindle) * [spinners](//github.com/fgribreau/spinners) * [strict-num](//github.com/RazrFalcon/strict-num) * [strsim](//github.com/dguo/strsim-rs) * [strum](//github.com/Peternator7/strum) * [strum\_macros](//github.com/Peternator7/strum) * [supports-color](//github.com/zkat/supports-color) * [supports-hyperlinks](//github.com/zkat/supports-hyperlinks) * [supports-unicode](//github.com/zkat/supports-unicode) * [sys-info](//github.com/FillZpp/sys-info-rs) * [sysinfo](//github.com/GuillaumeGomez/sysinfo) * [termcolor](//github.com/BurntSushi/termcolor) * [textplots](//github.com/loony-bean/textplots-rs) * [textwrap](//github.com/mgeisler/textwrap) * [tiff](//github.com/image-rs/image-tiff) * [tinystl](//github.com/lsh/tinystl) * [tobj](//github.com/Twinklebear/tobj) * [tokio-macros](//github.com/tokio-rs/tokio) * [tokio-stream](//github.com/tokio-rs/tokio) * [tokio-tungstenite](//github.com/snapview/tokio-tungstenite) * [tokio-util](//github.com/tokio-rs/tokio) * [tokio](//github.com/tokio-rs/tokio) * [tower-service](//github.com/tower-rs/tower) * [tracing-appender](//github.com/tokio-rs/tracing) * [tracing-attributes](//github.com/tokio-rs/tracing) * [tracing-core](//github.com/tokio-rs/tracing) * [tracing-futures](//github.com/tokio-rs/tracing) * [tracing-serde](//github.com/tokio-rs/tracing) * [tracing](//github.com/tokio-rs/tracing) * [try-lock](//github.com/seanmonstar/try-lock) * [twox-hash](//github.com/shepmaster/twox-hash) * [unicode-linebreak](//github.com/axelf4/unicode-linebreak) * [unsafe-libyaml](//github.com/dtolnay/unsafe-libyaml) * [urlencoding](//github.com/kornelski/rust_urlencoding) * [usvg-parser](//github.com/RazrFalcon/resvg) * [usvg-tree](//github.com/RazrFalcon/resvg) * [usvg](//github.com/RazrFalcon/resvg) * [walkdir](//github.com/BurntSushi/walkdir) * [walkers](//github.com/podusowski/walkers) * [want](//github.com/seanmonstar/want) * [wayland-backend](//github.com/smithay/wayland-rs) * [wayland-client](//github.com/smithay/wayland-rs) * [wayland-csd-frame](//github.com/rust-windowing/wayland-csd-frame) * [wayland-cursor](//github.com/smithay/wayland-rs) * [wayland-protocols-plasma](//github.com/smithay/wayland-rs) * [wayland-protocols-wlr](//github.com/smithay/wayland-rs) * [wayland-protocols](//github.com/smithay/wayland-rs) * [wayland-scanner](//github.com/smithay/wayland-rs) * [wayland-sys](//github.com/smithay/wayland-rs) * [wildmatch](//github.com/becheran/wildmatch) * [winnow](//github.com/winnow-rs/winnow) * [x11-dl](//github.com/AltF02/x11-rs.git) * [xcursor](//github.com/esposm03/xcursor-rs) * [xdg-home](//github.com/zeenix/xdg-home) * [xkbcommon-dl](//github.com/rust-windowing/xkbcommon-dl) * [xml-rs](//github.com/kornelski/xml-rs) * [xmlwriter](//github.com/RazrFalcon/xmlwriter) * [yaserde](//github.com/media-io/yaserde) * [yaserde\_derive](//github.com/media-io/yaserde) * [zbus](//github.com/dbus2/zbus/) * [zbus\_macros](//github.com/dbus2/zbus/) * 
[zbus\_names](//github.com/dbus2/zbus/) * [zstd](//github.com/gyscos/zstd-rs) * [zvariant](//github.com/dbus2/zbus/) * [zvariant\_derive](//github.com/dbus2/zbus/) * [zvariant\_utils](//github.com/dbus2/zbus/) 📋 Original MIT License ``` Copyright (c) Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` ### ISC License[​](#isc-license "Direct link to ISC License") 🔍 3rd party code licensed with ISC * [inotify-sys](//github.com/hannobraun/inotify-sys) * [inotify](//github.com/hannobraun/inotify) * [is\_ci](//github.com/zkat/is_ci) * [json5](//github.com/callum-oakley/json5-rs) * [libloading](//github.com/nagisa/rust_libloading/) * [rustls-webpki](//github.com/rustls/webpki) * [untrusted](//github.com/briansmith/untrusted) 📋 Original ISC License ``` Copyright . Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHORS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ``` ### BSD-2-Clause License[​](#bsd-2-clause-license "Direct link to BSD-2-Clause License") 🔍 3rd party code licensed with BSD-2-Clause * [arrayref](//github.com/droundy/arrayref) * [atomic-wait](//github.com/m-ou-se/atomic-wait) * [av1-grain](//github.com/rust-av/av1-grain) * [git-version](//github.com/fusion-engineering/rust-git-version) * [git-version-macro](//github.com/fusion-engineering/rust-git-version) * [http-cache-semantics](//github.com/kornelski/rusty-http-cache-semantics) * [rav1e](//github.com/xiph/rav1e/) * [re\_rav1d](//github.com/memorysafety/rav1d) * [v\_frame](//github.com/rust-av/v_frame) 📋 Original BSD-2-Clause License ``` BSD 2-Clause License Copyright (c) , All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ### BSD-3-Clause License[​](#bsd-3-clause-license "Direct link to BSD-3-Clause License") 🔍 3rd party code licensed with BSD-3-Clause * [AMD](//github.com/DrTimothyAldenDavis/SuiteSparse/) * [COLAMD](//github.com/DrTimothyAldenDavis/SuiteSparse/) * [alloc-no-stdlib](//github.com/dropbox/rust-alloc-no-stdlib) * [alloc-stdlib](//github.com/dropbox/rust-alloc-no-stdlib) * [avif-serialize](//github.com/kornelski/avif-serialize) * [cAMD](//github.com/DrTimothyAldenDavis/SuiteSparse/) * [cCOLAMD](//github.com/DrTimothyAldenDavis/SuiteSparse/) * [encoding\_rs](//github.com/hsivonen/encoding_rs) * [exr](//github.com/johannesvollmer/exrs) * [instant](//github.com/sebcrozet/instant) * [lebe](//github.com/johannesvollmer/lebe) * [nalgebra](//github.com/dimforge/nalgebra) * [never](//fuchsia.googlesource.com/fuchsia/+/master/garnet/lib/rust/never) * [ravif](//github.com/kornelski/cavif-rs) * [subtle](//github.com/dalek-cryptography/subtle) * [tiny-skia](//github.com/RazrFalcon/tiny-skia) * [tiny-skia-path](//github.com/RazrFalcon/tiny-skia/tree/master/path) 📋 Original BSD-3-Clause License ``` BSD 3-Clause License Copyright (c) , All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
``` ### Zlib License[​](#zlib-license "Direct link to Zlib License") 🔍 3rd party code licensed with Zlib * [const\_format](//github.com/rodrimati1992/const_format_crates/) * [const\_format\_proc\_macros](//github.com/rodrimati1992/const_format_crates/) * [nanorand](//github.com/Absolucy/nanorand-rs) * [slotmap](//github.com/orlp/slotmap) 📋 Original Zlib License ``` The zlib/libpng License Copyright (c) This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution. ``` ### MPL-2.0 License[​](#mpl-20-license "Direct link to MPL-2.0 License") 🔍 3rd party code licensed with MPL-2.0 * [colored](//github.com/mackwic/colored) * [indent](//git.sr.ht/~ilkecan/indent-rs) * [option-ext](//github.com/soc/option-ext.git) * [webpki-roots](//github.com/rustls/webpki-roots) 📋 Original MPL-2.0 License See [Mozilla's official page](//www.mozilla.org/en-US/MPL/2.0/) for information on this license. ### Boost Software License[​](#boost-software-license "Direct link to Boost Software License") 🔍 3rd party code licensed with BSL-1.0 * [xxhash-rust](//github.com/DoumanAsh/xxhash-rust) 📋 Original Boost Software License Boost Software License - Version 1.0 - August 17th, 2003 Permission is hereby granted, free of charge, to any person or organization obtaining a copy of the software and accompanying documentation covered by this license (the "Software") to use, reproduce, display, distribute, execute, and transmit the Software, and to prepare derivative works of the Software, and to permit third-parties to whom the Software is furnished to do so, all subject to the following: The copyright notices in the Software and this entire statement, including the above license grant, this restriction and the following disclaimer, must be included in all copies of the Software, in whole or in part, and all derivative works of the Software, unless such copies or derivative works are solely in the form of machine-executable object code generated by a source language processor. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
### CC0-1.0 License[​](#cc-10-license "Direct link to CC0-1.0 License") 🔍 3rd party code licensed with CC0-1.0 * [hexf-parse](//github.com/lifthrasiir/hexf) * [notify](//github.com/notify-rs/notify) * [tiny-keccak](//github.com/null) * [to\_method](//github.com/whentze/to_method) 📋 Original CC0-1.0 License ``` Creative Commons Legal Code CC0 1.0 Universal CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER. Statement of Purpose The laws of most jurisdictions throughout the world automatically confer exclusive Copyright and Related Rights (defined below) upon the creator and subsequent owner(s) (each and all, an "owner") of an original work of authorship and/or a database (each, a "Work"). Certain owners wish to permanently relinquish those rights to a Work for the purpose of contributing to a commons of creative, cultural and scientific works ("Commons") that the public can reliably and without fear of later claims of infringement build upon, modify, incorporate in other works, reuse and redistribute as freely as possible in any form whatsoever and for any purposes, including without limitation commercial purposes. These owners may contribute to the Commons to promote the ideal of a free culture and the further production of creative, cultural and scientific works, or to gain reputation or greater distribution for their Work in part through the use and efforts of others. For these and/or other purposes and motivations, and without any expectation of additional consideration or compensation, the person associating CC0 with a Work (the "Affirmer"), to the extent that he or she is an owner of Copyright and Related Rights in the Work, voluntarily elects to apply CC0 to the Work and publicly distribute the Work under its terms, with knowledge of his or her Copyright and Related Rights in the Work and the meaning and intended legal effect of CC0 on those rights. 1. Copyright and Related Rights. A Work made available under CC0 may be protected by copyright and related or neighboring rights ("Copyright and Related Rights"). Copyright and Related Rights include, but are not limited to, the following: i.the right to reproduce, adapt, distribute, perform, display, communicate, and translate a Work; ii. moral rights retained by the original author(s) and/or performer(s); iii.publicity and privacy rights pertaining to a person's image or likeness depicted in a Work; iv.rights protecting against unfair competition in regards to a Work, subject to the limitations in paragraph 4(a), below; v.rights protecting the extraction, dissemination, use and reuse of data in a Work; vi.database rights (such as those arising under Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, and under any national implementation thereof, including any amended or successor version of such directive); and vii.other similar, equivalent or corresponding rights throughout the world based on applicable law or treaty, and any national implementations thereof. 2. Waiver. 
To the greatest extent permitted by, but not in contravention of, applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and unconditionally waives, abandons, and surrenders all of Affirmer's Copyright and Related Rights and associated claims and causes of action, whether now known or unknown (including existing as well as future claims and causes of action), in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each member of the public at large and to the detriment of Affirmer's heirs and successors, fully intending that such Waiver shall not be subject to revocation, rescission, cancellation, termination, or any other legal or equitable action to disrupt the quiet enjoyment of the Work by the public as contemplated by Affirmer's express Statement of Purpose. 3. Public License Fallback. Should any part of the Waiver for any reason be judged legally invalid or ineffective under applicable law, then the Waiver shall be preserved to the maximum extent permitted taking into account Affirmer's express Statement of Purpose. In addition, to the extent the Waiver is so judged Affirmer hereby grants to each affected person a royalty-free, non transferable, non sublicensable, non exclusive, irrevocable and unconditional license to exercise Affirmer's Copyright and Related Rights in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "License"). The License shall be deemed effective as of the date CC0 was applied by Affirmer to the Work. Should any part of the License for any reason be judged legally invalid or ineffective under applicable law, such partial invalidity or ineffectiveness shall not invalidate the remainder of the License, and in such case Affirmer hereby affirms that he or she will not (i) exercise any of his or her remaining Copyright and Related Rights in the Work or (ii) assert any associated claims and causes of action with respect to the Work, in either case contrary to Affirmer's express Statement of Purpose. 4. Limitations and Disclaimers. a.No trademark or patent rights held by Affirmer are waived, abandoned, surrendered, licensed or otherwise affected by this document. b.Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work, express, implied, statutory or otherwise, including without limitation warranties of title, merchantability, fitness for a particular purpose, non infringement, or the absence of latent or other defects, accuracy, or the present or absence of errors, whether or not discoverable, all to the greatest extent permissible under applicable law. c.Affirmer disclaims responsibility for clearing rights of other persons that may apply to the Work or any use thereof, including without limitation any person's Copyright and Related Rights in the Work. Further, Affirmer disclaims responsibility for obtaining any necessary consents, permissions or other rights required for any use of the Work. 
d.Affirmer understands and acknowledges that Creative Commons is not a party to this document and has no duty or obligation with respect to this CC0 or use of the Work. ``` ### LGPL-2.1+ License[​](#lgpl-21-license "Direct link to LGPL-2.1+ License") 🔍 3rd Party Software Dynamically Linked under the LGPL-2.1+ The following software is dynamically linked to MetriCal under the LGPL-2.1+. * [CHOLMOD](//github.com/DrTimothyAldenDavis/SuiteSparse/) * [FFmpeg](//ffmpeg.org/) 📋 Original LGPL-2.1+ License Text ``` CHOLMOD/Check Module. Copyright (C) 2005-2023, Timothy A. Davis CHOLMOD is also available under other licenses; contact authors for details. http://suitesparse.com CHOLMOD/Cholesky module, Copyright (C) 2005-2023, Timothy A. Davis. CHOLMOD is also available under other licenses; contact authors for details. http://suitesparse.com This Module is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. This Module is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this Module; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA ``` ### GPL with Section 7 GCC Exception[​](#gpl-with-section-7-gcc-exception "Direct link to GPL with Section 7 GCC Exception") 🔍 3rd party libraries dynamically linked with the GPL with Section 7 GCC Exception * `libgomp1` * `libstdc++v3` * `libgcc_s` * `libc` * `libm` These libraries are linked into the final version of MetriCal incidentally, because some of our dependencies (CHOLMOD, OpenCV) are built with GCC. 📋 Original GPL with Section 7 GCC Exception License ``` This is the Debian GNU/Linux prepackaged version of the GNU compiler collection, containing Ada, C, C++, D, Fortran 95, Go, Objective-C, Objective-C++, and Modula-2 compilers, documentation, and support libraries. In addition, Debian provides the gm2 compiler, either in the same source package, or built from a separate same source package. Packaging is done by the Debian GCC Maintainers , with sources obtained from: ftp://gcc.gnu.org/pub/gcc/releases/ (for full releases) svn://gcc.gnu.org/svn/gcc/ (for prereleases) ftp://sourceware.org/pub/newlib/ (for newlib) git://git.savannah.gnu.org/gm2.git (for Modula-2) The current gcc-12 source package is taken from the git gcc-12-branch. 
Changes: See changelog.Debian.gz Debian splits the GNU Compiler Collection into packages for each language, library, and documentation as follows: Language Compiler package Library package Documentation --------------------------------------------------------------------------- Ada gnat-12 libgnat-12 gnat-12-doc C gcc-12 gcc-12-doc C++ g++-12 libstdc++6 libstdc++6-12-doc D gdc-12 Fortran 95 gfortran-12 libgfortran5 gfortran-12-doc Go gccgo-12 libgo0 Objective C gobjc-12 libobjc4 Objective C++ gobjc++-12 Modula-2 gm2-12 libgm2 For some language run-time libraries, Debian provides source files, development files, debugging symbols and libraries containing position- independent code in separate packages: Language Sources Development Debugging Position-Independent ------------------------------------------------------------------------------ C++ libstdc++6-12-dbg libstdc++6-12-pic D libphobos-12-dev Additional packages include: All languages: libgcc1, libgcc2, libgcc4 GCC intrinsics (platform-dependent) gcc-12-base Base files common to all compilers gcc-12-soft-float Software floating point (ARM only) gcc-12-source The sources with patches Ada: libgnat-util12-dev, libgnat-util12 GNAT version library C: cpp-12, cpp-12-doc GNU C Preprocessor libssp0-dev, libssp0 GCC stack smashing protection library libquadmath0 Math routines for the __float128 type fixincludes Fix non-ANSI header files C, C++ and Fortran 95: libgomp1-dev, libgomp1 GCC OpenMP (GOMP) support library libitm1-dev, libitm1 GNU Transactional Memory Library Biarch support: On some 64-bit platforms which can also run 32-bit code, Debian provides additional packages containing 32-bit versions of some libraries. These packages have names beginning with 'lib32' instead of 'lib', for example lib32stdc++6. Similarly, on some 32-bit platforms which can also run 64-bit code, Debian provides additional packages with names beginning with 'lib64' instead of 'lib'. These packages contain 64-bit versions of the libraries. (At this time, not all platforms and not all libraries support biarch.) The license terms for these lib32 or lib64 packages are identical to the ones for the lib packages. COPYRIGHT STATEMENTS AND LICENSING TERMS GCC is Copyright (C) 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019 Free Software Foundation, Inc. GCC is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your option) any later version. GCC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. Files that have exception clauses are licensed under the terms of the GNU General Public License; either version 3, or (at your option) any later version. On Debian GNU/Linux systems, the complete text of the GNU General Public License is in `/usr/share/common-licenses/GPL', version 3 of this license in `/usr/share/common-licenses/GPL-3'. 
The following runtime libraries are licensed under the terms of the GNU General Public License (v3 or later) with version 3.1 of the GCC Runtime Library Exception (included in this file): - libgcc (libgcc/, gcc/libgcc2.[ch], gcc/unwind*, gcc/gthr*, gcc/coretypes.h, gcc/crtstuff.c, gcc/defaults.h, gcc/dwarf2.h, gcc/emults.c, gcc/gbl-ctors.h, gcc/gcov-io.h, gcc/libgcov.c, gcc/tsystem.h, gcc/typeclass.h). - libatomic - libdecnumber - libgomp - libitm - libssp - libstdc++-v3 - libobjc - libgfortran - The libgnat-12 Ada support library and libgnat-util12 library. - Various config files in gcc/config/ used in runtime libraries. - libvtv The libbacktrace library is licensed under the following terms: Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: (1) Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. (2) Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. (3) The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. The libsanitizer libraries (libasan, liblsan, libtsan, libubsan) are licensed under the following terms: Copyright (c) 2009-2019 by the LLVM contributors. All rights reserved. Developed by: LLVM Team University of Illinois at Urbana-Champaign http://llvm.org Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal with the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimers. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimers in the documentation and/or other materials provided with the distribution. * Neither the names of the LLVM Team, University of Illinois at Urbana-Champaign, nor the names of its contributors may be used to endorse or promote products derived from this Software without specific prior written permission. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH THE SOFTWARE. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. The libffi library is licensed under the following terms: libffi - Copyright (c) 1996-2003 Red Hat, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the ``Software''), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. The documentation is licensed under the GNU Free Documentation License (v1.2). On Debian GNU/Linux systems, the complete text of this license is in `/usr/share/common-licenses/GFDL-1.2'. GCC RUNTIME LIBRARY EXCEPTION Version 3.1, 31 March 2009 Copyright (C) 2009 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. This GCC Runtime Library Exception ("Exception") is an additional permission under section 7 of the GNU General Public License, version 3 ("GPLv3"). It applies to a given file (the "Runtime Library") that bears a notice placed by the copyright holder of the file stating that the file is governed by GPLv3 along with this Exception. When you use GCC to compile a program, GCC may combine portions of certain GCC header files and runtime libraries with the compiled program. The purpose of this Exception is to allow compilation of non-GPL (including proprietary) programs to use, in this way, the header files and runtime libraries covered by this Exception. 0. Definitions. 
A file is an "Independent Module" if it either requires the Runtime Library for execution after a Compilation Process, or makes use of an interface provided by the Runtime Library, but is not otherwise based on the Runtime Library. "GCC" means a version of the GNU Compiler Collection, with or without modifications, governed by version 3 (or a specified later version) of the GNU General Public License (GPL) with the option of using any subsequent versions published by the FSF. "GPL-compatible Software" is software whose conditions of propagation, modification and use would permit combination with GCC in accord with the license of GCC. "Target Code" refers to output from any compiler for a real or virtual target processor architecture, in executable form or suitable for input to an assembler, loader, linker and/or execution phase. Notwithstanding that, Target Code does not include data in any format that is used as a compiler intermediate representation, or used for producing a compiler intermediate representation. The "Compilation Process" transforms code entirely represented in non-intermediate languages designed for human-written code, and/or in Java Virtual Machine byte code, into Target Code. Thus, for example, use of source code generators and preprocessors need not be considered part of the Compilation Process, since the Compilation Process can be understood as starting with the output of the generators or preprocessors. A Compilation Process is "Eligible" if it is done using GCC, alone or with other GPL-compatible software, or if it is done without using any work based on GCC. For example, using non-GPL-compatible Software to optimize any GCC intermediate representations would not qualify as an Eligible Compilation Process. 1. Grant of Additional Permission. You have permission to propagate a work of Target Code formed by combining the Runtime Library with Independent Modules, even if such propagation would otherwise violate the terms of GPLv3, provided that all Target Code was generated by Eligible Compilation Processes. You may then convey such a combination under terms of your choice, consistent with the licensing of the Independent Modules. 2. No Weakening of GCC Copyleft. The availability of this Exception does not imply any general presumption that third-party software is unaffected by the copyleft requirements of the license of GCC. libquadmath/*.[hc]: Copyright (C) 2010 Free Software Foundation, Inc. Written by Francois-Xavier Coudert Written by Tobias Burnus This file is part of the libiberty library. Libiberty is free software; you can redistribute it and/or modify it under the terms of the GNU Library General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. Libiberty is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Library General Public License for more details. libquadmath/math: atanq.c, expm1q.c, j0q.c, j1q.c, log1pq.c, logq.c: Copyright 2001 by Stephen L. Moshier This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. 
This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. coshq.c, erfq.c, jnq.c, lgammaq.c, powq.c, roundq.c: Changes for 128-bit __float128 are Copyright (C) 2001 Stephen L. Moshier and are incorporated herein by permission of the author. The author reserves the right to distribute this material elsewhere under different copying permissions. These modifications are distributed here under the following terms: This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. ldexpq.c: * Conversion to long double by Ulrich Drepper, * Cygnus Support, drepper@cygnus.com. cosq_kernel.c, expq.c, sincos_table.c, sincosq.c, sincosq_kernel.c, sinq_kernel.c, truncq.c: Copyright (C) 1997, 1999 Free Software Foundation, Inc. The GNU C Library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. The GNU C Library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. isinfq.c: * Written by J.T. Conklin . * Change for long double by Jakub Jelinek * Public domain. llroundq.c, lroundq.c, tgammaq.c: Copyright (C) 1997, 1999, 2002, 2004 Free Software Foundation, Inc. This file is part of the GNU C Library. Contributed by Ulrich Drepper , 1997 and Jakub Jelinek , 1999. The GNU C Library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. The GNU C Library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. log10q.c: Cephes Math Library Release 2.2: January, 1991 Copyright 1984, 1991 by Stephen L. Moshier Adapted for glibc November, 2001 This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. remaining files: * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved. * * Developed at SunPro, a Sun Microsystems, Inc. business. * Permission to use, copy, modify, and distribute this * software is freely granted, provided that this notice * is preserved. 
``` ### EPL-2.0 License[​](#epl-20-license "Direct link to EPL-2.0 License") 🔍 3rd party code licensed with EPL-2.0 * [keyed-set](//github.com/p-avital/keyed-set-rs) * [ringbuffer-spsc](//github.com/Mallets/ringbuffer-spsc) * [token-cell](//github.com/p-avital/token-cell-rs) * [validated\_struct](//github.com/p-avital/validated-struct-rs) * [validated\_struct\_macros](//github.com/p-avital/validated-struct-macros-rs) 📋 Original EPL-2.0 License ``` Eclipse Public License - v 2.0 THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT. 1. DEFINITIONS "Contribution" means: a) in the case of the initial Contributor, the initial content Distributed under this Agreement, and b) in the case of each subsequent Contributor: i) changes to the Program, and ii) additions to the Program; where such changes and/or additions to the Program originate from and are Distributed by that particular Contributor. A Contribution "originates" from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include changes or additions to the Program which: (i) are separable from the Program, and (ii) are not derivative works of the Program. "Contributor" means any person or entity that Distributes the Program. "Licensed Patents" means patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program. "Program" means the Contributions Distributed in accordance with this Agreement. "Recipient" means anyone who receives the Program under this Agreement or any Secondary License (as applicable), including Contributors. "Derivative Works" shall mean any work, whether in Source Code or other form, that is based upon (or derived from) the Program and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. "Modified Works" shall mean any work in Source Code or other form that results from an addition to, deletion from, or modification of the contents of the Program, including, for purposes of clarity, any new file in Source Code form that contains any contents of the Program. Secondary licenses, if any, are identified in the "Secondary License List" appendix to this license. 2. GRANT OF RIGHTS a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, Distribute and sublicense the Program and such Derivative Works in Source Code and Object Code form. b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Program or portions thereof, subject only to those patent claims that are necessarily infringed by the Program or portions thereof alone and not by combination with other components. 
c) If a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense. For avoidance of doubt, this indemnification obligation only applies where the Commercial Contributor has knowledge of the IP infringement claim or if a court of competent jurisdiction has found that the Commercial Contributor has infringed intellectual property. 3. REQUIREMENTS a) If a Contributor Distributes the Program in any form, then: i) the Program must also be made available as Source Code, in accordance with section 3(a)(ii), and the Contributor must accompany the Program with a statement that the Source Code for the Program is available under this Agreement, and informs Recipients how to obtain it in a reasonable manner on or through a medium customarily used for software exchange; and ii) the Contributor may Distribute the Program under a license different than this Agreement, provided that such license: A) effectively disclaims on behalf of all other Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose; B) effectively excludes on behalf of all other Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits; C) does not attempt to limit or alter the recipients' rights in the Source Code under section 3(a)(ii); and D) requires any subsequent distribution of the Program by any party to be under a license that satisfies the requirements of this section 3(a). b) Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution. 4. COMMERCIAL DISTRIBUTION Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. 
Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense. For avoidance of doubt, this indemnification obligation only applies where the Commercial Contributor has knowledge of the IP infringement claim or if a court of competent jurisdiction has found that the Commercial Contributor has infringed intellectual property. 5. NO WARRANTY EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT PERMITTED BY APPLICABLE LAW, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations. 6. DISCLAIMER OF LIABILITY EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT PERMITTED BY APPLICABLE LAW, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 7. GENERAL If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable. If Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed. 
All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive. Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. The Eclipse Foundation is the initial Agreement Steward. The Eclipse Foundation may assign the responsibility for serving as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be Distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to Distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved. Nothing in this Agreement is intended to be enforceable by any entity that is not a Contributor or Recipient. No third-party beneficiary rights are created by this Agreement. 8. MISCELLANEOUS This Agreement shall be governed by and construed in accordance with the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation. ``` ### Non-standard / or Multi-Licensed Licenses[​](#non-standard--or-multi-licensed-licenses "Direct link to Non-standard / or Multi-Licensed Licenses") Some packages contain software components that are multi-licensed, or are otherwise specially licensed. These are listed here. 📋 Ring crate The [ring](//github.com/briansmith/ring/blob/main/LICENSE) crate is used as a transient dependency in several packages above. The license for this code falls under several different agreements: ``` *ring* uses an "ISC" license, like BoringSSL used to use, for new code files. See LICENSE-other-bits for the text of that license. See LICENSE-BoringSSL for code that was sourced from BoringSSL under the Apache 2.0 license. Some code that was sourced from BoringSSL under the ISC license. In each case, the license info is at the top of the file. See src/polyfill/once_cell/LICENSE-APACHE and src/polyfill/once_cell/LICENSE-MIT for the license to code that was sourced from the once_cell project. ====ISC License==== Copyright 2015-2025 Brian Smith. 
Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ====LICENSE-other-bits==== Copyright 2015-2025 Brian Smith. Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ====LICENSE-BoringSSL==== Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. 
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ## Licenses for support code Parts of the TLS test suite are under the Go license. This code is not included in BoringSSL (i.e. libcrypto and libssl) when compiled, however, so distributing code linked against BoringSSL does not trigger this license: Copyright (c) 2009 The Go Authors. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. - Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. BoringSSL uses the Chromium test infrastructure to run a continuous build, trybots etc. The scripts which manage this, and the script for generating build metadata, are under the Chromium license. Distributing code linked against BoringSSL does not trigger this license. Copyright 2015 The Chromium Authors. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. - Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ====src/polyfill/once_cell/LICENSE-APACHE==== Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. 
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ====/src/polyfill/once_cell/LICENSE-MIT==== Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHOR OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
```

---

# Camera Targets

MetriCal supports a large variety of traditional camera calibration targets, as well as a few of our own design. You can even use multiple targets at once, without having to do much of anything! Learn how here: [Using Multiple Targets](/metrical/targets/multiple_targets.md).

Purchase Targets, or Make Your Own

You can now purchase calibration targets directly from our [online store](https://shop.tangramvision.com/). This store holds all the targets necessary to run through our [calibration guides](/metrical/calibration_guides/guide_overview.md). If you are a company or facility that needs a more sophisticated target setup for automation or production line purposes, contact us at .

That being said, you can always make your own! Find examples for AprilGrid, Markerboard, and Lidar targets in the [MetriCal Premade Targets repository](https://gitlab.com/tangram-vision/platform/metrical_premade_targets) on GitLab.

* AprilGrid
* Markerboard
* SquareMarkers
* DotMarkers
* Checkerboard

## AprilGrid[​](#aprilgrid "Direct link to AprilGrid")

AprilGrids are patterned sets of Apriltags. They have contrasting squares in the corner of every tag; this gives feature detection algorithms more information from which to derive corner locations.

[⬇️Download Object Space JSON: AprilGrid](/_object_space_templates/april_grid.json)

![Target: AprilGrid](/assets/images/fiducial_aprilgrid_desc-9d17f65302111bd2d493beaeacbaa8b1.png)

| Field | Type | Description |
| --- | --- | --- |
| `marker_dictionary` | string | The marker dictionary used on this target. See Supported Marker Dictionaries below. |
| `marker_grid_width` | float | Number of AprilTags / markers horizontally on the board |
| `marker_grid_height` | float | Number of AprilTags / markers vertically on the board |
| `marker_length` | float | The length of one edge of the AprilTags in the board, in meters |
| `tag_spacing` | float | The space between the tags, as a fraction of the edge size \[0.0, 1.0] |
| `marker_id_offset` | integer | (Optional) Lowest marker ID present in the board. This will offset all expected marker values. Default is `0`. |

***

### Supported Marker Dictionaries[​](#supported-marker-dictionaries "Direct link to Supported Marker Dictionaries")

| Value | Description |
| --- | --- |
| "Apriltag16h5" | 4x4 bit Apriltag containing 20 markers. Minimum hamming distance between any two codes is 5. |
| "Apriltag25h9" | 5x5 bit Apriltag containing 35 markers. Minimum hamming distance between any two codes is 9. |
| "Apriltag36h11" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. |
| "ApriltagKalibr" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. Every marker has a border width of 2; this is the only main differentiator between Kalibr and other Apriltag types. |

## Markerboard[​](#markerboard "Direct link to Markerboard")

Markerboards are similar to checkerboards, but contain a series of coded markers in the empty spaces of the checkerboard.
These codes are most often in the form of April or ArUco tags, which allow for better identification and isolation of features. [⬇️Download Object Space JSON: Markerboard](/_object_space_templates/markerboard.json) ![Target: Markerboard](/assets/images/fiducial_markerboard_desc-be1fe2b33be51d7fff2275101b313347.png) | Field | Type | Description | | ------------------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `checker_length` | float | The length of the side of a checker (a solid square), in meters | | `corner_height` | float | The number of inner corners on the board, counted vertically. This is one less than the number of columns on the board | | `corner_width` | float | The number of inner corners on the board, counted horizontally. This is one less than the number of rows on the board | | `marker_dictionary` | string | The marker dictionary used on this target. See Supported Marker Dictionaries below. | | `marker_length` | float | The length of the side of a marker, in meters | | `marker_id_offset` | integer | (Optional) Lowest marker ID present in the board. This will offset all expected marker values. Default is `0`. | | `initial_corner` | string | (Optional) Valid values are "square" and "marker". Default is "marker". Used to counteract a [breaking change](https://github.com/opencv/opencv/issues/23152) to markerboards generated by newer versions of OpenCV. This option specifies if the origin corner of your board is populated with a solid black square or a marker. In boards generated by OpenCV >=4.6.0, the origin will always be a black square. In older versions, the origin can sometimes be a marker. | *** ### Supported Marker Dictionaries[​](#supported-marker-dictionaries "Direct link to Supported Marker Dictionaries") | Value | Description | | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | "Aruco4x4\_50" | 4x4 bit Aruco containing 50 markers. | | "Aruco4x4\_100" | 4x4 bit Aruco containing 100 markers. | | "Aruco4x4\_250" | 4x4 bit Aruco containing 250 markers. | | "Aruco4x4\_1000" | 4x4 bit Aruco containing 1000 markers. | | "Aruco5x5\_50" | 5x5 bit Aruco containing 50 markers. | | "Aruco5x5\_100" | 5x5 bit Aruco containing 100 markers. | | "Aruco5x5\_250" | 5x5 bit Aruco containing 250 markers. | | "Aruco5x5\_1000" | 5x5 bit Aruco containing 1000 markers. | | "Aruco6x6\_50" | 6x6 bit Aruco containing 50 markers. | | "Aruco6x6\_100" | 6x6 bit Aruco containing 100 markers. | | "Aruco6x6\_250" | 6x6 bit Aruco containing 250 markers. | | "Aruco6x6\_1000" | 6x6 bit Aruco containing 1000 markers. | | "Aruco7x7\_50" | 7x7 bit Aruco containing 50 markers. | | "Aruco7x7\_100" | 7x7 bit Aruco containing 100 markers. | | "Aruco7x7\_250" | 7x7 bit Aruco containing 250 markers. | | "Aruco7x7\_1000" | 7x7 bit Aruco containing 1000 markers. | | "ArucoOriginal" | 5x5 bit Aruco containing the original generated marker library. | | "Apriltag16h5" | 4x4 bit Apriltag containing 20 markers. 
Minimum hamming distance between any two codes is 5. | | "Apriltag25h9" | 5x5 bit Apriltag containing 35 markers. Minimum hamming distance between any two codes is 9. | | "Apriltag36h10" | 6x6 bit Apriltag containing 2320 markers. Minimum hamming distance between any two codes is 10. | | "Apriltag36h11" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. | | "ApriltagKalibr" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. Every marker has a border width of 2; this is the only main differentiator between Kalibr and other Apriltag types. | ## SquareMarkers[​](#squaremarkers "Direct link to SquareMarkers") "SquareMarkers" is a general catch-all for a collection of signalized markers, e.g. a calibration space made up of many unconnected ArUco or April tags. ![Target: Markers](/assets/images/fiducial_markers_desc-6b4aff646787fa83efa4b5202a5998a0.png) | Field | Type | Description | | ------------------- | ---------------- | ----------------------------------------------------------------------------------- | | `marker_dictionary` | string | The marker dictionary used on this target. See Supported Marker Dictionaries below. | | `marker_ids` | list of integers | The marker IDs present in this set of SquareMarkers. | *** ### Supported Marker Dictionaries[​](#supported-marker-dictionaries "Direct link to Supported Marker Dictionaries") | Value | Description | | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | "Aruco4x4\_50" | 4x4 bit Aruco containing 50 markers. | | "Aruco4x4\_100" | 4x4 bit Aruco containing 100 markers. | | "Aruco4x4\_250" | 4x4 bit Aruco containing 250 markers. | | "Aruco4x4\_1000" | 4x4 bit Aruco containing 1000 markers. | | "Aruco5x5\_50" | 5x5 bit Aruco containing 50 markers. | | "Aruco5x5\_100" | 5x5 bit Aruco containing 100 markers. | | "Aruco5x5\_250" | 5x5 bit Aruco containing 250 markers. | | "Aruco5x5\_1000" | 5x5 bit Aruco containing 1000 markers. | | "Aruco6x6\_50" | 6x6 bit Aruco containing 50 markers. | | "Aruco6x6\_100" | 6x6 bit Aruco containing 100 markers. | | "Aruco6x6\_250" | 6x6 bit Aruco containing 250 markers. | | "Aruco6x6\_1000" | 6x6 bit Aruco containing 1000 markers. | | "Aruco7x7\_50" | 7x7 bit Aruco containing 50 markers. | | "Aruco7x7\_100" | 7x7 bit Aruco containing 100 markers. | | "Aruco7x7\_250" | 7x7 bit Aruco containing 250 markers. | | "Aruco7x7\_1000" | 7x7 bit Aruco containing 1000 markers. | | "ArucoOriginal" | 5x5 bit Aruco containing the original generated marker library. | | "Apriltag16h5" | 4x4 bit Apriltag containing 20 markers. Minimum hamming distance between any two codes is 5. | | "Apriltag25h9" | 5x5 bit Apriltag containing 35 markers. Minimum hamming distance between any two codes is 9. | | "Apriltag36h10" | 6x6 bit Apriltag containing 2320 markers. Minimum hamming distance between any two codes is 10. | | "Apriltag36h11" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. | | "ApriltagKalibr" | 6x6 bit Apriltag containing 587 markers. Minimum hamming distance between any two codes is 11. Every marker has a border width of 2; this is the only main differentiator between Kalibr and other Apriltag types. 
|

## DotMarkers[​](#dotmarkers "Direct link to DotMarkers")

Under Development

This target type is still under development. Please check back later for more information.

DotMarkers are the same as SquareMarkers, but all of the black and white squares within the tags are replaced with circles! In addition to being a fun name, DotMarkers allow MetriCal to use circles while also preserving the code information of Aruco or April tags.

[⬇️Download Object Space JSON: DotMarkers](/_object_space_templates/dot_markers.json)

![Target: DotMarker](/assets/images/fiducial_dotmarker-b03e82d9b11bdfeca55c21bddfb942c8.png)

| Field | Type | Description |
| --- | --- | --- |
| `marker_dictionary` | string | The marker dictionary used on this target. See Supported Marker Dictionaries below. |
| `marker_ids` | list of integers | The marker IDs present in this set of DotMarkers. |

***

### Supported Marker Dictionaries[​](#supported-marker-dictionaries "Direct link to Supported Marker Dictionaries")

| Value | Description |
| --- | --- |
| "Aruco4x4\_50" | 4x4 bit Aruco containing 50 markers. |
| "Aruco4x4\_100" | 4x4 bit Aruco containing 100 markers. |
| "Aruco4x4\_250" | 4x4 bit Aruco containing 250 markers. |
| "Aruco4x4\_1000" | 4x4 bit Aruco containing 1000 markers. |
| "Aruco5x5\_50" | 5x5 bit Aruco containing 50 markers. |
| "Aruco5x5\_100" | 5x5 bit Aruco containing 100 markers. |
| "Aruco5x5\_250" | 5x5 bit Aruco containing 250 markers. |
| "Aruco5x5\_1000" | 5x5 bit Aruco containing 1000 markers. |
| "Aruco6x6\_50" | 6x6 bit Aruco containing 50 markers. |
| "Aruco6x6\_100" | 6x6 bit Aruco containing 100 markers. |
| "Aruco6x6\_250" | 6x6 bit Aruco containing 250 markers. |
| "Aruco6x6\_1000" | 6x6 bit Aruco containing 1000 markers. |
| "Aruco7x7\_50" | 7x7 bit Aruco containing 50 markers. |
| "Aruco7x7\_100" | 7x7 bit Aruco containing 100 markers. |
| "Aruco7x7\_250" | 7x7 bit Aruco containing 250 markers. |
| "Aruco7x7\_1000" | 7x7 bit Aruco containing 1000 markers. |
| "ArucoOriginal" | 5x5 bit Aruco containing the original generated marker library. |

## Checkerboard[​](#checkerboard "Direct link to Checkerboard")

Limitations of Checkerboards

While MetriCal supports checkerboards, it is important to note some limitations:

* Points on the checkerboard are ambiguous. No calibration system can reliably tell the difference between a checkerboard rotated 180° and one that is not rotated at all. The same applies between rotations of 90° and 270°. This ambiguity means that MetriCal cannot reliably differentiate extrinsics, which causes [projective compensations](/metrical/core_concepts/projective_compensation.md).
* The entire checkerboard needs to be visible in the field-of-view of the camera. With coded targets or asymmetric patterns, MetriCal can still identify key features without the full target in view.

We recommend using coded detectors like the Markerboard whenever possible. This allows MetriCal to be more flexible to different data collection practices, and reduces the burden on you to keep the entire object space in frame at all times.

Anyone who has ever tried their hand at calibration is familiar with the checkerboard. This is a flat, contrasting pattern of squares with known dimensionality. It's known for its ease of creation and flexibility in use.
[⬇️Download Object Space JSON: Checkerboard](/_object_space_templates/checkerboard.json) ![Target: Checkerboard](/assets/images/fiducial_checkerboard_desc-be489e805ee883d971d5df784d01145d.png) | Field | Type | Description | | ---------------- | ----- | ---------------------------------------------------------------------------------------------------------------------- | | `checker_length` | float | The length of the side of a checker (a solid square), in meters | | `corner_height` | float | The number of inner corners on the board, counted vertically. This is one less than the number of columns on the board | | `corner_width` | float | The number of inner corners on the board, counted horizontally. This is one less than the number of rows on the board | --- # Combining Modalities MetriCal's real power comes from its ability to calibrate multiple sensor modalities at once. This is made possible by combining different target types for different sensor types, all within the same calibration session. The target combination most commonly used is a Camera-LiDAR MultiTarget, which combines a markerboard for camera detection with a retroreflective circle for LiDAR detection. This target type allows simultaneous calibration of cameras and LiDAR sensors. Just like individual camera and LiDAR targets, multiple Camera-LiDAR MultiTargets can be used together in a single calibration session to improve coverage and accuracy. See the documentation on [using multiple targets](/metrical/targets/multiple_targets.md) for more information. ## Camera-LiDAR MultiTarget[​](#camera-lidar-multitarget "Direct link to Camera-LiDAR MultiTarget") This target is a markerboard or aprilgrid with a retroreflective ring on its surface. This target type is required to perform a Camera ↔ LiDAR calibration with MetriCal. This hybrid target is represented in object space as two separate object spaces: one for the camera target and one for the circle. The description below is only for the circle. See the documentation on using [mutual construction groups](/metrical/targets/multiple_targets.md#mutual-construction-groups) for more information. Using Multiple Circle Targets When using multiple circle targets, MetriCal requires that the radii of the circles be at least **10cm different** from each other. If the radii are too similar, the calibration may fail or produce incorrect results. [⬇️Download Object Space JSON: Camera-LiDAR MultiTarget](/_object_space_templates/camera_lidar_multitarget.json) ![Target: Camera-LiDAR MultiTarget](/assets/images/camera_lidar_target-a812f9903da313cb8b25a0bcc240e47f.png) ## Measuring the Circle Offsets[​](#measuring-the-circle-offsets "Direct link to Measuring the Circle Offsets") There's no requirement for the circle to be perfectly centered on the camera target. However, you will need to measure the offsets of the circle center relative to the camera target origin in order to use this target type. We'll put this information into the mutual construction group definition for the two object spaces. Here's an example of how that looks in JSON: ``` "mutual_construction_groups": [ { "24e6df7b-b756-4b9c-a719-660d45d796bf": "parent", // Markerboard object space "d66d5ad4-b0e9-11f0-91a9-2b83bda4ed9c": // Circle object space { "parent_from_object": { "rotation": [ // No rotation 0, 0, 0, 1 ], "translation": [ 0.375, // X offset in meters 0.375, // Y offset in meters 0 // Z offset in meters ] } } } ], ... ``` Notice that the offsets are relative to the "origin" of the camera target. 
The way that this is defined differs across board types, and can be a bit confusing. Premade Targets Are Nice If you are following our [target construction guidelines](/metrical/targets/target_construction.md) and are using a Tangram premade target, we have already calculated offsets from the proper origin and constructed the JSON for you. If you are constructing your own custom circle target, though, you will need to find the origin of your board in order to measure circle offsets yourself. ### AprilGrid + Circle Target Origin[​](#aprilgrid--circle-target-origin "Direct link to AprilGrid + Circle Target Origin") To find the origin of an **AprilGrid** style target, first find the tag matching the `marker_id_offset` of the board. Orient this tag so that it's in the **bottom-left corner of your frame of reference**. The origin of the board is the **top left corner** of the **top left marker** of your board (see diagram). ![Target: Circular AprilGrid Description](/assets/images/fiducial_circle_desc_aprilgrid-b4ad7b1694e1bda235015e236a91d3de.png) ### Markerboard + Circle Target Origin[​](#markerboard--circle-target-origin "Direct link to Markerboard + Circle Target Origin") To find the origin of a **Markerboard** style target, first find the tag matching the `marker_id_offset` of the board. Orient this tag so that it's in the **top-left corner of your frame of reference**. The origin of the board is the **bottom right corner** of the **top left checker** of your board (see diagram). Note that the top left checker of your board may be either a black square or a white checker with a tag inside of it. If it's the latter, the origin is the corner of the checker rather than the tag itself. ![Target: Circle Target Description](/assets/images/fiducial_circle_desc-1a709db6b954b1f4358ee15882eb2fc9.png) --- # LiDAR Targets There's only one style of calibration target that MetriCal supports for LiDAR sensors: the **Circle Target**. Super-DIY We're confident that you can build this one. All you'll need for this target type is some good retroreflective tape (something [like this](https://www.amazon.com/gp/product/B0798MFH1V/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8\&psc=1)) and a circle template. Lucky for you, templates can be found in the [MetriCal Premade Targets repository](https://gitlab.com/tangram-vision/platform/metrical_premade_targets) on GitLab. ## Circle Target[​](#circle-target "Direct link to Circle Target") This target is just a retroreflective ring! This target type is required to perform any calibration with a LiDAR. If it looks simple, that's because it is. Using Multiple Circle Targets When using multiple circle targets, MetriCal requires that the radii of the circles be at least **10cm different** from each other. If the radii are too similar, the calibration may fail or produce incorrect results. [⬇️Download Object Space JSON: Circle Target](/_object_space_templates/circle.json) ![Target: Circle Target](/assets/images/circle-0929cf2ae715c1023541085c92ee4679.png) | Field | Type | Description | | ------------------------ | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `radius` | float | The radius of the circle, in meters. Measured to the circle's outer edge. | | `detect_interior_points` | boolean | Whether to use the LiDAR points detected within the circle bounds as part of the optimization. 
Doing so will produce [Interior Points to Plane Metrics](/metrical/results/residual_metrics/interior_points_to_plane_error.md) | | `reflective_tape_width` | float | (Optional) The width of the circular retroreflective tape, in meters. Used as a hint during circle detection. Defaults to 5cm | --- # Using Multiple Targets ## Target Configuration[​](#target-configuration "Direct link to Target Configuration") MetriCal works with multiple calibration targets simultaneously. You don't need a specially measured room or precise target positioning—just a few targets visible to each sensor is sufficient. For most users, our [premade targets](https://gitlab.com/tangram-vision/platform/metrical_premade_targets) combined with the target selection wizard will handle all technical details automatically. This approach provides you with both suitable targets and a correctly formatted object space file. Custom object space configuration is typically only necessary for specialized setups or non-standard calibration requirements. The sections below explain how to configure custom object spaces if your application requires it. ## Constructing the Object Space[​](#constructing-the-object-space "Direct link to Constructing the Object Space") ### Generating UUIDs[​](#generating-uuids "Direct link to Generating UUIDs") Each target should be assigned a different UUID (v4); this is how MetriCal keeps track of them, similar to how it tracks components. If you're writing your own object space file, you can generate a valid UUID using a site like [UUID Generator](https://www.uuidgenerator.net/) or, if you're on Linux, using the `uuid` command: ``` $ sudo apt install uuid $ uuid -v4 b5e4183c-d1ae-11ee-91e7-afd8bef1d15c # <--- a valid UUID ``` ### Format[​](#format "Direct link to Format") Below, we have an example of a JSON object that describes two markerboards, elegantly named `24e6df7b-b756-4b9c-a719-660d45d796bf` and `7324938d-de4e-4d36-a25b-fbd8e6102026`: ``` { "object_spaces": { "24e6df7b-b756-4b9c-a719-660d45d796bf": { "descriptor": { "variances": [0.00004, 0.00004, 0.0004] }, "detector": { "markerboard": { "checker_length": 0.1524, "corner_height": 5, "corner_width": 13, "marker_dictionary": "Aruco4x4_1000", "marker_id_offset": 0, "marker_length": 0.1 } } }, "7324938d-de4e-4d36-a25b-fbd8e6102026": { "descriptor": { "variances": [0.00004, 0.00004, 0.0004] }, "detector": { "markerboard": { "checker_length": 0.1524, "corner_height": 5, "corner_width": 13, "marker_dictionary": "Aruco4x4_1000", "marker_id_offset": 50, "marker_length": 0.1 } } } } } ``` ### Differentiating Targets[​](#differentiating-targets "Direct link to Differentiating Targets") #### Different Dictionaries[​](#different-dictionaries "Direct link to Different Dictionaries") Careful observers will note that each target in the example above has a different `marker_id_offset`. This indicates the lowest tag ID on the target, with the rest of the tags increasing sequentially. There should be no overlap in tag IDs between targets; all tags should be unique. If this assumption is violated, *horrible things will happen*... mainly, your calibration results may make absolutely no sense. This goes for different *dictionaries* as well. Counterintuitively, users have reported misdetections when using aruco markers with different dictionaries, but identical IDs. Bottom line: make sure that **all of your targets have unique IDs**, no matter what dictionary they use. 
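If you are assembling a custom object space by hand, it can help to sanity-check ID uniqueness before collecting data. Below is a minimal, illustrative Python sketch (it is not part of MetriCal or the premade targets repository) that flags overlapping marker ID ranges. It assumes you already know how many tags are printed on each board; the target names and marker counts are placeholders for your own setup.

```
# Illustrative only: check that marker ID ranges do not overlap across targets.
# MetriCal does not ship this script; fill in the offsets and marker counts
# from your own object space file and boards.
from itertools import combinations

# (name, marker_id_offset, number_of_markers_on_board) for each target
targets = [
    ("markerboard_a", 0, 50),   # uses IDs 0..49
    ("markerboard_b", 50, 50),  # uses IDs 50..99
]

def id_range(offset: int, count: int) -> range:
    """Tag IDs on a board run sequentially upward from marker_id_offset."""
    return range(offset, offset + count)

all_unique = True
for (name_a, off_a, n_a), (name_b, off_b, n_b) in combinations(targets, 2):
    a, b = id_range(off_a, n_a), id_range(off_b, n_b)
    if max(a.start, b.start) < min(a.stop, b.stop):  # the ranges intersect
        print(f"Overlap detected: {name_a} and {name_b} share marker IDs")
        all_unique = False

if all_unique:
    print("All marker ID ranges are unique")
```

This mirrors the two-markerboard example above, where the second board is differentiated by setting `marker_id_offset` to 50.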
#### Circle Target Radii[​](#circle-target-radii "Direct link to Circle Target Radii") When using multiple [Circle targets](/metrical/targets/lidar_targets.md), MetriCal requires that the radii of the circles be at least **10cm different** from each other. If the radii are too similar, the calibration may fail or produce incorrect results. ### Object Relative Extrinsics (OREs)[​](#object-relative-extrinsics-ores "Direct link to Object Relative Extrinsics (OREs)") Since MetriCal runs a full optimization over all components and object spaces, it naturally derives extrinsics between everything as well. You'll find that the `spatial_constraints` field in the JSON will be populated with the extrinsics between all targets post-calibration. Just like any other data source, more object spaces mean more OREs; more OREs will add time to the optimization. It's just more to solve for! If you're not interested in surveying the extrinsics between object spaces, and are just worried about the component-side calibration, we recommend setting `--disable-ore-inference`: ``` metrical calibrate --disable-ore-inference ... ``` It's important to note that this flag's setting shouldn't dramatically change the component-side calibration. The only meaningful difference is that your results will report spatial constraints for all objects and components. ## Mutual Construction Groups[​](#mutual-construction-groups "Direct link to Mutual Construction Groups") Some targets are designed to be used together. For example, in order to calibrate cameras and lidar simultaneously, we just [combine their targets](/metrical/targets/combining_modalities.md) and then tell MetriCal their spatial relationship. We indicate this by assigning them to the same *mutual construction group*: In a mutual construction group, one target is designated as the "parent," while the other targets are defined relative to this parent. The relative position and orientation of each target in the group are specified using a translation and rotation (Quaternion). This setup allows MetriCal to understand that these targets are part of a single physical entity. For targets that are physically constructed together (such as our LiDAR circle target), the transform is often just a simple offset in the XY plane. In this example, the offset is a translation between the visual target's origin (the top-left corner of the markerboard) and the LiDAR target's origin (the center of the circle). This feature is also used to support *consolidated object spaces* wherein multiple distinct targets with known relative transforms are treated as a single target. This additional information can help MetriCal in challenging scenarios. In that case, the relative transforms are usually non-trivial and determined by a surveying process. See the [`consolidate-object-spaces`](/metrical/commands/calibration/consolidate.md) command and the [narrow field-of-view camera guide](/metrical/calibration_guides/narrow_fov_cal.md) for more details. ``` { // The markerboard and circle targets "object_spaces": { "24e6df7b-b756-4b9c-a719-660d45d796bf": { ... "detector": { "markerboard": { ... } } }, "d66d5ad4-b0e9-11f0-91a9-2b83bda4ed9c": { ... "detector": { "circle": { ... 
---

# Target Construction Guide

This guide consolidates information on constructing calibration targets for use with MetriCal, focusing specifically on target construction regardless of sensor modality or purpose.

**Tangram Vision Target Repository**

Access Tangram Vision's [target repository](https://gitlab.com/tangram-vision/platform/metrical_premade_targets) for easy-to-construct target templates.

## Target Selection and Generation

### Using Pre-made Targets (Recommended)

When using MetriCal, we generally recommend choosing from the premade targets in our [target repository](https://gitlab.com/tangram-vision/platform/metrical_premade_targets). Though it is possible to design your own targets with other tools, or to repurpose targets that you may have used with other calibration software, using prebuilt targets helps prevent many common issues.

All targets in the premade target repository come with the object space file that was used to generate them, so they're guaranteed to be accurate. Additionally, the target repository includes a target selection wizard that helps you automatically select targets that are valid in combination with one another.

#### Target Selection Wizard

To use the target selection wizard:

1. You'll need a system with Python 3 installed
2. Download the [target repository](https://gitlab.com/tangram-vision/platform/metrical_premade_targets)
3. Run `python3 target_selection_wizard.py`
4. Follow the onscreen instructions
5. When complete, the wizard will output PDFs of all chosen targets to the chosen output directory, as well as an object space JSON file that can be used directly with MetriCal

### Target Type Considerations

When selecting target types, consider these factors:

1. **Target Type**:
   * **AprilGrid-style targets** have higher feature density than Markerboards, which improves calibration quality. However, our AprilGrid detector is less robust than the Markerboard one, and may be less consistent in certain scenarios. We generally recommend trying AprilGrids first and switching to Markerboards if your detections look sporadic.
   * **Markerboard (ChArUco)** targets might be preferred if you need the larger tag ID dictionary or need to fit into an existing calibration pipeline
2. **Target Size**:
   * Choose the largest target size that you can practically use
   * Bigger targets are generally more advantageous for calibration quality
   * They can be seen from farther away and by more cameras at once
   * Once the target is big enough to exceed the FOV of your camera at the desired capture distance, there is much less advantage to increasing size further
3. **Marker Density**:
   * Higher marker density creates more features for calibration, but risks detection failures at greater distances or lower resolutions
   * "Standard" density targets are a good default option
   * "Sparse" density may be appropriate if using targets very far from your camera or with low-resolution cameras
   * "Dense" density is suitable for close-range, high-resolution captures
   * It's better to err on the sparser side: getting any detections is more important than maximizing the number of detections
   * Aim for markers to always appear larger than 20 px in your captured images
4. **LiDAR Compatibility**:
   * LiDAR targets require a retroreflective ring that makes them detectable
   * If using multiple LiDAR targets, the radius of each retroreflective circle must differ by at least 10 cm
   * You can mix LiDAR and non-LiDAR targets in your setup
5. **Marker ID Offsets**:
   * Each target needs completely unique marker IDs
   * The selection wizard handles this automatically
   * If designing your own targets, you must ensure no ID overlap

## Printing and Assembly

### Printing Specifications

1. Find a print shop that can print targets of your chosen sizes
   * For US-based customers, [UPrinting](https://www.uprinting.com/) has proven reliable
   * Recommended printing substrates:
     * Foam board (lighter, better for carrying during object-motion captures)
     * Aluminum (more durable and easier to mount)
2. **CRITICAL**: Request that the print shop *center the PDFs rather than scaling them at all*
   * Maintaining the exact scale of the final printed markers is extremely important for calibration accuracy

### Assembling LiDAR Targets

If you printed LiDAR targets, you'll need to apply retroreflectors to the printed yellow circle as a separate step:

1. Use any retroreflective tape cut into small segments
2. Apply carefully so that the yellow circle is covered as evenly as possible
3. For premade LiDAR targets, Tangram offers precut, exactly-sized retroreflective stickers that are easier to apply consistently
   * Contact us for more details

## Verification and Final Steps

### Verify Measurements

After printing, measure your target to verify that it is correctly scaled:

1. All premade targets have size info printed in the top-left corner
2. For AprilGrid targets:
   * Measure one of the markers
   * Ensure it matches the `marker_length` field
3. For Markerboards:
   * Measure one of the black squares
   * Ensure it matches the `checker_length` field

### Target Placement Considerations

When using multiple targets:

1. Ensure their relative transforms remain constant for the duration of the calibration sequence
2. Consider how to practically mount each target in a multi-target scenario
3. Place targets such that there are times when multiple targets can be observed by the same sensor

## Troubleshooting Target Issues

If you encounter the "No Features Detected" error ([cal-calibrate-001](/metrical/commands/command_errors.md)), check the following:

* The measurements of your targets should be in meters
* The dictionary used for your boards, if applicable (e.g., markers from a 4×4 dictionary have an inner pattern made up of 4×4 squares)
* For Markerboards, verify your `initial_corner` is set to the correct variant ('Marker' or 'Square')
* Make sure you can see as much of the board as possible when collecting your data
* Try adjusting your camera or LiDAR filter settings to ensure detections are not filtered out

## Custom Target Solutions

If you have unique constraints and feel that the premade targets are insufficient, contact us. Tangram Vision has designed and implemented custom target detectors for clients in the past and can help with special requests.

---

# Target Overview

MetriCal relies on targets to provide metric information that is otherwise difficult to obtain from an unstructured data collection. For instance, a single camera can suffer from scale ambiguity, commonly referred to as the "dollhouse" effect, where the size and distance of objects cannot be accurately determined without known reference points. Calibration targets provide these reference points, allowing MetriCal to accurately estimate both intrinsic and extrinsic parameters of all sensors.

## Purchase MetriCal Targets

You can now purchase calibration targets directly from our [online store](https://shop.tangramvision.com/). This store holds all the targets necessary to run through our [calibration guides](/metrical/calibration_guides/guide_overview.md). If you are a company or facility that needs a more sophisticated target setup for automation or production-line purposes, contact us.

## Object Space Configuration Files

In MetriCal, targets are the physical objects used during calibration. The software converts them into features or patterns that it can detect in the collected data. These features are represented in MetriCal as [object spaces](/metrical/core_concepts/object_space_overview.md). MetriCal requires an object space file in order to understand the geometry and layout of the targets being used. You'll find pre-defined object space templates for each target type within the target guides.

Note that unlike other similar calibration systems, MetriCal can support an arbitrary number of targets in a single calibration procedure without any special configuration. This means you can mix and match different target types to suit your specific calibration needs; simply add more object space entries to your configuration file.
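As a quick sanity check on such a file, you can list each object space entry and its detector type. The Python snippet below is a minimal sketch based on the object space format shown earlier in this documentation, not a MetriCal command; it assumes each entry's `detector` contains a single keyed variant such as `markerboard` or `circle`.

```
# Minimal sketch: list each object space UUID and its detector type.
import json
import sys

with open(sys.argv[1]) as f:
    object_spaces = json.load(f)["object_spaces"]

for uuid, space in object_spaces.items():
    # Each detector is assumed to hold a single variant, e.g. "markerboard" or "circle"
    detector_type = next(iter(space["detector"]))
    print(f"{uuid}: {detector_type}")
```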
## Guides

- [Targets for Cameras](/metrical/targets/camera_targets.md): Information on MetriCal-compatible targets for camera calibration.
- [Targets for LiDAR](/metrical/targets/lidar_targets.md): Information on MetriCal-compatible targets for LiDAR calibration.
- [Combining Targets for Multi-Modal Calibration](/metrical/targets/combining_modalities.md): Combine target designs for convenient multi-modal calibration.
- [Using Multiple Targets in a Single Calibration](/metrical/targets/multiple_targets.md): Leverage multiple targets at once for enhanced data capture.
- [Building Your Own Targets](/metrical/targets/target_construction.md): Use this handy guide to build a full calibration suite.

---