Version: 9.0

Object Space

An Object Space refers to a known set of features in the environment that is observed in our calibration data. This often takes the form of a fiducial marker, a calibration target, or similar mechanism. This is one of the main user inputs to MetriCal, along with the system's Plex and calibration dataset.

One of the most difficult problems in calibration is that of cross-modality data correlation. A camera is fundamentally 2D, while LiDAR is 3D. How do we bridge the gap? By using the right object space! Many seasoned calibrators are familiar with the checkerboards and grids that are used for camera calibration; these are object spaces as well.

Object spaces as used by MetriCal help define the parameters needed to accurately and precisely detect and manipulate features seen across modalities in the environment. This section serves as a reference for the different kinds of object spaces, detectors, and features supported by MetriCal; and perhaps more importantly, how to combine these object spaces.

Diving Into The Unknown(s)

Calibration processes often use external sources of knowledge to learn values that a component (a camera, for instance) couldn't derive on its own. Continuing the example with cameras as our component of reference — there's no sense of scale in a photograph. A photo of a mountain could be larger than life, or it could be a diorama; the camera has no way of knowing, and the image unto itself has no way to communicate the metric scale of what it contains.

This is where object spaces come into play. If we place a target with known metric properties in the image, we now have a reference for metric space.

Target in object space

Most component types require some target field like this for proper calibration. For LiDAR, we use a circular target comprising a checkerboard and some retroreflective tape; similarly, for cameras, MetriCal supports a whole host of different checkerboards and signalized markers. Each of these fiducials is referred to as an object space.

Object Spaces are Optimized

In MetriCal, even object space points have covariance! This reflects the imperfection of real life; even the sturdiest target can warp and bend, which will create uncertainty. We embed this possibility in the covariance value of each object space point. That way, you can have greater certainty of the results, even if your target is not the perfectly "flat" or idealized geometric abstraction that is often assumed in calibration software.

One of MetriCal's most distinctive capabilities is its ability to optimize the object space itself. With MetriCal, it is possible to calibrate with imperfect boards without inducing projective compensation errors in your final results.

Multi-Target Calibrations? No Problem.

In addition to MetriCal's ability to optimize the object space, MetriCal can also optimize across multiple object spaces. By specifying multiple targets in a scene, it becomes possible to calibrate complex scenarios that wouldn't otherwise be feasible. For example, MetriCal can optimize an extrinsic between two cameras with zero overlap if multiple object spaces are used.

Object Space Structure

Like our plex structure, object spaces are serialized as JSON objects or files and are passed into MetriCal for many of its different modes.

Object Space Schema

The full JSON schema for object spaces can be found in the MetriCal Sensor Calibration Utilities repository on GitLab.

| Field | Type | Description |
| --- | --- | --- |
| `object_spaces` | A map of UUID to object space objects | The collection of object spaces (identified by UUID) to use. This comprises detector/descriptor pairs for each object space. |
| `spatial_constraints` | An array of spatial constraints | (Optional) Spatial constraints between multiple object spaces, if applicable. Learn more in "Using Multiple Fiducials". |
| `mutual_construction_groups` | A set of object space UUIDs | (Optional) Sets of object spaces that exist as a mutual construction. Learn more in "Using Multiple Fiducials". |
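To make the structure concrete, a minimal object space file might look like the sketch below. The three top-level keys match the fields described above, but the detector block (`"checkerboard"` and its parameters) and the UUID are hypothetical placeholders for illustration only; consult the JSON schema in the MetriCal Sensor Calibration Utilities repository for the authoritative field names.

```json
{
  "object_spaces": {
    "9b2c6f3a-0d5e-4c1a-8f7b-2e4d6a8c0b1d": {
      "detector": {
        "checkerboard": {
          "rows": 7,
          "columns": 9,
          "checker_length_m": 0.08
        }
      }
    }
  },
  "spatial_constraints": [],
  "mutual_construction_groups": []
}
```

Here a single checkerboard target is registered under one UUID, with no constraints or construction groups; a multi-target setup would add more entries to `object_spaces` and describe their relationships in the two optional fields.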