
Object Space

Object space refers to a known point or set of points captured in our calibration data (often in the form of a fiducial marker, a calibration target, or a similar mechanism) that helps TVCal learn certain parameters. This is one of the main user inputs to TVCal, along with the system's Plex and calibration dataset.

Diving Into The Unknown(s)

Calibration processes often use external sources of knowledge to learn values that a component couldn't derive on its own. Take cameras, for instance: there's no sense of scale in a photograph. A mountain in the frame could be larger than life, or it could be a diorama; the camera has no way of knowing!

That is, unless we tell the camera something about the scale of the photograph. If we place a target with known metric properties in the image, we now have a reference for metric space.

[Figure: Target in object space]
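To make this concrete, here is a minimal sketch using OpenCV (not TVCal itself) of how a target with known metric dimensions anchors a camera observation to metric space. The square size, intrinsics, and pose below are assumed values chosen purely for illustration.

```python
# A minimal sketch (not TVCal itself): a target with known metric dimensions
# gives a camera observation a reference for metric space.
import numpy as np
import cv2

# Object-space coordinates of a 7x5 grid of target corners, in meters.
# The 0.03 m spacing is the assumption that injects real-world scale.
square_size_m = 0.03
object_points = np.array(
    [[c * square_size_m, r * square_size_m, 0.0] for r in range(5) for c in range(7)]
)

# Assumed pinhole intrinsics and a ground-truth pose, used here only to
# synthesize the pixel observations a corner detector would normally provide.
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
true_rvec = np.array([0.1, -0.2, 0.05])
true_tvec = np.array([0.05, -0.02, 0.60])  # 60 cm in front of the camera
image_points, _ = cv2.projectPoints(object_points, true_rvec, true_tvec,
                                    camera_matrix, dist_coeffs)

# Because the object points are expressed in meters, the recovered pose is too.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
print("Estimated target distance (m):", np.linalg.norm(tvec))
```

Without the known 0.03 m spacing, the same pixel observations could correspond to a tiny board up close or a huge board far away; the metric object space is what resolves that ambiguity.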

Every component type requires some kind of target like this for proper calibration. For LiDAR, it might be a perfectly flat plane; for a depth sensor, it might be markers captured at various distances. In TVCal, we describe all of these calibration aids as object space.

Object Space Covariance

In TVCal, even object space points have covariance! This reflects the imperfection of real life: even the sturdiest target can warp and bend, and that warping creates uncertainty. We embed this possibility in the covariance of each object space point. That way, you can have greater confidence in the results, even if your target is damaged.
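As a hypothetical sketch (the names below are not TVCal's actual data model), one way to picture this is each object space point carrying its own covariance, which then weights how strongly that point constrains the calibration:

```python
# Hypothetical sketch: an object space point that carries a covariance, so a
# slightly warped target contributes uncertainty instead of silently biasing
# the calibration.
from dataclasses import dataclass
import numpy as np

@dataclass
class ObjectSpacePoint:
    position: np.ndarray    # nominal 3D position, e.g. in meters
    covariance: np.ndarray  # 3x3 covariance of that position

# A corner we trust to roughly +/- 1 mm in-plane, with more uncertainty
# out of plane (the direction in which a board tends to warp).
corner = ObjectSpacePoint(
    position=np.array([0.09, 0.06, 0.0]),
    covariance=np.diag([1e-6, 1e-6, 4e-6]),  # variances in m^2
)

# In a least-squares calibration, this covariance typically weights the
# point's residuals: the larger the variance, the less that point
# constrains the solution.
weight = np.linalg.inv(corner.covariance)
print(weight)
```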

Components, Detectors, and Descriptors

There are three things to consider when describing object space:

  • Component type: What kind of component is observing the space (e.g. a camera)
  • Detector: What we should look for in the data to find the object space
  • Descriptor: How we should interpret the detector's output to derive the object space (a sketch of how these fit together follows this list)
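Here is an illustrative sketch of how those three pieces might fit together in a single object space description. The field names below are hypothetical and do not reflect TVCal's actual schema.

```python
# Illustrative only: hypothetical field names, not TVCal's real object space
# format. It shows which component type observes the space, what to detect,
# and how to interpret those detections metrically.
object_space = {
    "component_type": "camera",     # what is observing the space
    "detector": {                   # what to look for in the data
        "type": "checkerboard",
        "rows": 5,
        "columns": 7,
    },
    "descriptor": {                 # how to turn detections into metric points
        "square_size_m": 0.03,
        "origin": "top_left_corner",
    },
}
```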

We covered component types earlier when creating our Plex, so let's turn our attention to Detectors and Descriptors.