metrical evaluate
Purpose
- Evaluate the quality of a calibration on a test dataset.
- Visualize the calibration as applied to a test dataset.
Usage
metrical evaluate [OPTIONS] <INPUT_DATA_PATH> <PLEX_PATH> <OBJECT_SPACE_PATH>
Description
Evaluate applies a calibration to a given dataset and produces metrics that validate the quality of the calibration. This is commonly referred to as validating the calibration on a "test" dataset, rather than on the "training" dataset that produced the calibration in the first place.
An evaluation optimization runs the same adjustment as a metrical calibrate run, with one crucial difference: it will not solve for any values in the plex. Instead, it fixes these values and uses them to generate metrics.
When evaluating, make sure to use an optimized plex JSON extracted from a previous MetriCal run for your system. Use jq to extract this data into its own file for easy analysis and comparison to the original inputs.
Install jq using apt:
sudo apt install jq
Then use jq to extract the relevant optimized data.
jq .plex results.json > optimized_plex.json
jq .object_space results.json > optimized_obj.json
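If jq is not available, the same extraction can be done with a few lines of Python. This is a sketch: it only assumes that results.json carries top-level `plex` and `object_space` keys, as in the jq commands above; the inner keys shown here are placeholders, not the real MetriCal schema.

```python
import json

# Stand-in for a real MetriCal results file; an actual results.json holds
# the full optimized plex and object space under these same top-level keys.
# The inner keys below are placeholders for illustration only.
results = {"plex": {"components": []}, "object_space": {"boards": []}}
with open("results.json", "w") as f:
    json.dump(results, f)

# Equivalent of `jq .plex results.json > optimized_plex.json`
with open("results.json") as f:
    results = json.load(f)
with open("optimized_plex.json", "w") as f:
    json.dump(results["plex"], f, indent=2)

# Equivalent of `jq .object_space results.json > optimized_obj.json`
with open("optimized_obj.json", "w") as f:
    json.dump(results["object_space"], f, indent=2)
```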
Arguments
<INPUT_DATA_PATH>
Input data path for this evaluation.
MetriCal accepts a few data formats:
- ROS1 bags, in the form of a .bag file.
- ROS2 bags, in the form of a .mcap file.
- Folders, as a flat directory of data. The folder passed to MetriCal should itself hold one folder of data for every component.
In all cases, the topic/folder name must match a named component in the plex in order to be matched correctly. If this is not the case, there's no need to edit the plex directly; instead, one may use the --topic-to-component flag.
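As an illustration of the folder format, a dataset for a plex with two components named cam0 and lidar0 (hypothetical names) would hold one subfolder per component:

```shell
# Build a minimal example dataset layout: one subfolder per component,
# each named after a component in the plex.
mkdir -p eval_dataset/cam0 eval_dataset/lidar0
ls eval_dataset
```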
<PLEX_PATH>
The plex.json used as a prior for MetriCal's adjustment.
<OBJECT_SPACE_PATH>
A path pointing to a description of the object space for the adjustment. This should be a JSON file.
Options
Evaluate mode supports all global options, as well as the following:
-d, --disable-filter
Disable data filtering based on scene motion. This is useful for datasets that are a series of snapshots, rather than continuous motion.
-t, --stillness-threshold <STILLNESS_THRESHOLD>
This threshold is used for filtering data based on detected feature motion in the image. An image is considered "still" if the average delta in features between subsequent frames is below this threshold. The units for this threshold are in pixels/frame. [default: 1]
-o, --output-json <OUTPUT_JSON>
The output path to save the final JSON output of the program. [default: results.json]
-T, --topic-to-component <topic_name:component_name>
A mapping of ROS topic/folder names to component names/UUIDs in the input plex.
MetriCal only parses data that has a topic-component mapping. Ideally, topics and components share the same name. However, if this is not the case, use this flag to map topic names from the dataset to component names in the plex.
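For example, if a bag records a camera on topic /camera_0/image but the plex names that component cam0 (both names hypothetical), the mapping could be passed as:

```shell
# Map the dataset topic /camera_0/image to the plex component cam0.
metrical evaluate \
  --topic-to-component /camera_0/image:cam0 \
  $EVAL_DATA $OPTIMIZED_PLEX $OBJSPC
```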
-v, --render
Whether to visualize the current command using Rerun.
--render-socket <RENDER_SOCKET>
The web socket address on which Rerun is listening.
This should be an IP address and port number separated by a colon, e.g. --render-socket="127.0.0.1:3030". By default, Rerun will listen on the socket host.docker.internal:9876.
When running Rerun from its CLI, the IP would correspond to Rerun's --bind option, and the port would correspond to its --port option.
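Putting this together, one way to point MetriCal at a locally running Rerun viewer (the address and port here are illustrative) would be:

```shell
# Start a Rerun viewer listening on a known address, then direct
# MetriCal's rendering to that same socket.
rerun --bind 127.0.0.1 --port 3030 &
metrical evaluate --render --render-socket="127.0.0.1:3030" \
  $EVAL_DATA $OPTIMIZED_PLEX $OBJSPC
```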
Examples
1. Run an evaluation, and render the calibration process and results:

metrical evaluate -v \
  --render-results \
  $EVAL_DATA $OPTIMIZED_PLEX $OBJSPC