metrical shape
Purpose
- Reshape an input plex into any number of useful output formats.
Usage
metrical shape <PLEX_OR_RESULTS_PATH> <OUTPUT_DIR> <COMMAND>
Description
For a single-camera system, a plex is a simple affair. For a complicated multi-component system, plexes can become incredibly complex and difficult to parse. This is where Shape comes in.
Shape transforms a plex into a variety of useful configurations. It was created with an eye towards the practical use of calibration data in a deployed system. For example, many automotive and mobile robotics applications operate under a hub-and-spoke model for their calibration, where all components are spatially constrained to a single origin component. Shape can take the fully-connected output of a normal calibration and create this hub-and-spoke model using the "focus" command.
Unlike other modes, Shape has commands of its own, each with different arguments. We'll break these down in addition to the usual top-level Arguments and Options.
Arguments
<PLEX_OR_RESULTS_PATH>
The plex.json or results.json used as a starting point for this Shape operation. Accepting a results.json directly makes it easy to chain multiple Shape operations off of a fresh calibration.
<OUTPUT_DIR>
The output directory for this shaping operation.
Options
-v, --verbose
Print the results of the operation to stdout. This is useful for debugging.
Commands
mst
: Minimum Spanning Tree
Create a plex which only features the Minimum Spanning Tree of all spatial components.
As defined by Wikipedia, a minimum spanning tree (MST) "is a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices together, without any cycles and with the minimum possible total edge weight". In other words, it's the set of connections with the smallest weight that still connects all of the components together.
In the case of a plex's spatial constraints, the "weights" for our MST search are the determinants of each extrinsic's covariance. This is a good measure of how well-constrained a component is to its neighbors.
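For intuition, here's a minimal sketch of that selection in Python. The component names and the `constraints` layout below are hypothetical illustrations, not MetriCal's actual plex schema:

```python
# A sketch of MST selection over spatial constraints. The data layout here
# is assumed for illustration, not MetriCal's actual data model.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

components = ["cam_0", "cam_1", "imu_0"]
index = {name: i for i, name in enumerate(components)}

# Each constraint carries the 6x6 covariance of its extrinsic estimate; the
# determinant of that covariance serves as the MST edge weight.
constraints = [
    ("cam_0", "cam_1", np.diag([1e-6] * 3 + [1e-4] * 3)),
    ("cam_0", "imu_0", np.diag([1e-5] * 3 + [1e-3] * 3)),
    ("cam_1", "imu_0", np.diag([1e-4] * 3 + [1e-2] * 3)),
]

n = len(components)
weights = np.zeros((n, n))
for a, b, cov in constraints:
    # Smaller determinant = tighter covariance = better-constrained edge.
    weights[index[a], index[b]] = np.linalg.det(cov)

mst = minimum_spanning_tree(csr_matrix(weights))
rows, cols = mst.nonzero()
print([(components[r], components[c]) for r, c in zip(rows, cols)])
# Only these spatial constraints would survive in the shaped plex.
```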
urdf
: ROS URDF
Create a URDF for ROS from this plex.
Option | Description |
---|---|
--use-optical-frame | Whether to create a link for each camera's optical frame in the URDF representation of the plex. When set and the component is a camera, an extra link representing the camera's optical frame is added to the robot model: base → camera (FLU) → camera_optical (RDF). When unset, no such link is created (base → camera (FLU)); this is useful if the optical frame isn't needed in the URDF, or is being handled some other way. |
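The optical link encodes nothing more than a fixed rotation between the two axis conventions (FLU: x-Forward, y-Left, z-Up; RDF: x-Right, y-Down, z-Forward). As a sketch, assuming the standard ROS convention of rpy = (-π/2, 0, -π/2) for this change of frame, the rotation looks like this in SciPy (this illustrates the geometry, not MetriCal's URDF writer):

```python
# The fixed FLU -> RDF frame change used for optical links in ROS
# (rpy = -pi/2, 0, -pi/2 in a URDF joint). Illustration only.
import numpy as np
from scipy.spatial.transform import Rotation

# Extrinsic xyz Euler angles match URDF's roll-pitch-yaw convention.
r_optical_in_body = Rotation.from_euler("xyz", [-np.pi / 2, 0.0, -np.pi / 2])

# Sanity check: optical +z (forward in RDF) lands on body +x (forward in FLU).
print(r_optical_in_body.apply([0.0, 0.0, 1.0]))  # ~ [1, 0, 0]
```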
focus
: Hub-and-Spoke Plex
Create a plex in which all components are spatially connected to only one origin component/object space ID. In other words, one component acts as a "focus", with spatial constraints to all other components.
Option | Description |
---|---|
-a, --component <COMPONENT> | The component to focus this plex on. |
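For intuition, here's a minimal sketch of the re-rooting that focus performs, assuming the plex's spatial constraints form a connected graph of 4×4 extrinsics. The `extrinsics` dictionary and component names are hypothetical, not MetriCal's schema:

```python
# Re-root every component at a chosen focus by composing extrinsics along
# graph paths. Data layout is assumed for illustration.
import numpy as np
from collections import deque

def invert(T):
    """Invert a rigid-body 4x4 transform without a full matrix inverse."""
    Ti = np.eye(4)
    Ti[:3, :3] = T[:3, :3].T
    Ti[:3, 3] = -T[:3, :3].T @ T[:3, 3]
    return Ti

def refocus(extrinsics, focus):
    """Return T_focus_x for every component x reachable from the focus."""
    adj = {}
    for (a, b), t_ab in extrinsics.items():
        adj.setdefault(a, []).append((b, t_ab))          # T_a_b
        adj.setdefault(b, []).append((a, invert(t_ab)))  # T_b_a
    poses, queue = {focus: np.eye(4)}, deque([focus])
    while queue:
        u = queue.popleft()
        for v, t_uv in adj[u]:
            if v not in poses:
                poses[v] = poses[u] @ t_uv  # T_f_v = T_f_u @ T_u_v
                queue.append(v)
    return poses

# Toy extrinsics: extrinsics[(a, b)] maps points in frame b into frame a.
t_cam0_cam1 = np.eye(4); t_cam0_cam1[:3, 3] = [0.10, 0.0, 0.0]
t_cam1_imu0 = np.eye(4); t_cam1_imu0[:3, 3] = [0.0, 0.05, 0.0]
extrinsics = {("cam_0", "cam_1"): t_cam0_cam1, ("cam_1", "imu_0"): t_cam1_imu0}

poses = refocus(extrinsics, "imu_0")  # every component now hangs off imu_0
```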
lut
: Camera LUT
Create a pixel-wise lookup table, as described by the calibration parameters of a single camera.
It's not always easy to integrate a new camera model into a pipeline. Sometimes you just want to apply a correction and not have to worry about getting all of the math right. LUT mode gives you a shortcut: it describes the per-pixel shift required to apply a calibration to an entire image.
Note that a lookup table only describes the correction for an image of the same dimensions as the calibrated image. If you're trying to downsample or upsample an image, you'll need to derive a new lookup table for that image dimension.
These lookup tables can be used as-is with OpenCV's lookup table routines; see this open-source repo on applying lookup tables in OpenCV for an example, or the sketch after the options table below.
Option | Description |
---|---|
-a, --camera <CAMERA> | The camera to generate a LUT for. Must be a camera component. |
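As a sketch of that workflow, assuming the LUT stores per-pixel x/y shifts (the `dx`/`dy` field names and the filenames below are assumptions, not a confirmed MetriCal output format), the correction can be applied with OpenCV's remap routine:

```python
# Applying a correction LUT with cv2.remap. Field names ("dx", "dy") and
# filenames are assumed for illustration.
import json
import cv2
import numpy as np

image = cv2.imread("ir_one.png", cv2.IMREAD_GRAYSCALE)
with open("ir_one_lut.json") as f:  # hypothetical output filename
    lut = json.load(f)

h, w = image.shape[:2]
dx = np.asarray(lut["dx"], dtype=np.float32).reshape(h, w)
dy = np.asarray(lut["dy"], dtype=np.float32).reshape(h, w)

# cv2.remap wants absolute source coordinates, so add the shifts to a pixel
# grid matching the calibrated image's dimensions.
xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                     np.arange(h, dtype=np.float32))
corrected = cv2.remap(image, xs + dx, ys + dy, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("ir_one_corrected.png", corrected)
```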
stereo-lut
: Stereo Rectification LUTs
Create two pixel-wise lookup tables that produce a stereo-rectified pair from an existing calibration. Rectification occurs in a hallucinated frame whose translation sits at the origin of the dominant eye, but whose rotation is halfway between the dominant frame and the secondary frame.
Rectification is the process of transforming a stereo pair of images into a common plane. This is useful for a host of applications, including feature matching and disparity estimation, and is often a prerequisite for other computer vision tasks.
The full JSON schema for stereo rectification can be found in the MetriCal Sensor Calibration Utilities repository on GitLab.
The Stereo LUT mode will create two lookup tables, one for each camera in the stereo pair. These LUTs use the same format as the ones created by the lut command, but they map the pixel-wise shift needed to move each image into that common plane. The result is a pair of rectified images!
Stereo LUT mode will also output the values necessary to calculate depth from disparity values. These are included in a separate JSON file in the output directory.
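To make the "rotation halfway between" concrete, here's a small sketch, assuming `r_dominant_secondary` is the rotation of the secondary eye expressed in the dominant eye's frame (a toy value below; MetriCal computes the real one for you):

```python
# Halving the rotation vector yields a frame halfway between the two eyes;
# both images are rotated into this shared orientation during rectification.
import numpy as np
from scipy.spatial.transform import Rotation

r_dominant_secondary = Rotation.from_euler("y", 10.0, degrees=True)  # toy
halfway = Rotation.from_rotvec(r_dominant_secondary.as_rotvec() / 2.0)
print(np.degrees(halfway.as_rotvec()))  # ~ [0, 5, 0]
```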
Option | Description |
---|---|
-a, --dominant <DOMINANT> | The dominant eye in this stereo pair. Must be a camera component. |
-b, --secondary <SECONDARY> | The secondary eye in this stereo pair. Must be a camera component. |
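Once the rectified pair is in hand, depth follows from disparity as Z = f·B/d. A minimal sketch, assuming the companion JSON provides a rectified focal length and baseline (the field names and filename below are assumptions, not a confirmed schema):

```python
# Depth from disparity: Z = f * B / d. Field names and the filename are
# assumed for illustration; consult the actual stereo-lut output.
import json
import numpy as np

with open("stereo_params.json") as f:  # hypothetical output filename
    params = json.load(f)

f_px = params["focal_length_px"]   # rectified focal length, pixels (assumed)
baseline_m = params["baseline_m"]  # distance between the eyes, meters (assumed)

disparity_px = np.array([[8.0, 16.0], [32.0, 64.0]])  # toy disparity map
depth_m = (f_px * baseline_m) / disparity_px
```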
Examples
1. Create a minimum spanning tree plex
metrical shape <PLEX_OR_RESULTS_PATH> <OUTPUT_DIR> mst
2. Create a URDF from a plex
metrical shape <PLEX_OR_RESULTS_PATH> <OUTPUT_DIR> urdf
4. Create a plex focused on component ir_one
metrical shape <PLEX_OR_RESULTS_PATH> <OUTPUT_DIR> focus --component ir_one
4. Create a correction lookup table for camera ir_one
metrical shape <PLEX_OR_RESULTS_PATH> <OUTPUT_DIR> lut --camera ir_one
5. Create rectification lookup tables for a stereo pair
metrical shape <PLEX_OR_RESULTS_PATH> <OUTPUT_DIR> \
stereo-lut --dominant ir_one --secondary ir_two