IMU Data Capture
Unlike other modalities supported by MetriCal, IMU calibrations do not require any specific object space or target, as these types of components (accelerometers and gyroscopes) measure forces and rotational velocities directly.
What IMUs do require, however, is that they are calibrated alongside one or more cameras. The advice herein assumes that your dataset will be configured to perform a camera ↔ IMU calibration.
Practical Example
We've captured an example of a good IMU ↔ Camera calibration dataset that you can use to test out MetriCal. If it's your first time performing an IMU calibration using MetriCal, it might be worth running through this dataset once just so that you can get a sense of what good data capture looks like.
Running the Example Dataset
First, download the dataset here and unzip it somewhere on your computer. Then, copy the following bash script into the dataset directory:
This script assumes that you have either installed the apt version of MetriCal, or that you are using the docker version with an alias set to metrical. For more information, please review the MetriCal installation instructions.
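If you're going the docker route, the alias might look something like the sketch below. The image name and mount layout here are assumptions for illustration only; use the exact invocation from the installation instructions.

# Hypothetical docker alias -- the image name below is a placeholder;
# substitute the one given in the MetriCal installation instructions.
alias metrical='docker run --rm -it --volume "$(pwd)":"$(pwd)" --workdir "$(pwd)" tangramvision/metrical:latest'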
#!/usr/bin/env bash
set -e

DATA=camera_imu_box3.mcap
PLEX=plex.json
RESULTS=results.json
REPORT=report.html
OBJ=object.json

# Describe the system under calibration: one camera topic and one IMU topic.
metrical init -y \
  -m /rs/color/image_raw:opencv_radtan \
  -m /rs/imu:scale_shear_rotation \
  $DATA $PLEX

# Run the calibration, rendering detections in real time as they happen.
metrical calibrate \
  --render \
  --camera-motion-threshold 4.0 \
  -o $RESULTS \
  --report-path $REPORT \
  $DATA $PLEX $OBJ
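Save the script under any file name you like (calibrate_imu.sh is just an example used here) and make it executable:

# Make the script executable; you'll launch it with ./calibrate_imu.sh
# once you've read through the notes below.
chmod +x calibrate_imu.sh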
Before running the script, let's take note of a couple of things:

- The metrical init command uses the -m flag to describe the system being calibrated. It indicates that the dataset topic /rs/color/image_raw represents a camera that needs opencv_radtan intrinsics generated, and that the /rs/imu topic is an IMU that needs its scale, shear, and rotation calibrated. The configuration generated by metrical init is saved to a file named plex.json, which will be used during the calibration step to configure the system (a quick way to inspect it follows this list). You can learn more about plexes here.
- metrical calibrate is being passed a --camera-motion-threshold value to help deal with some of the intense motion of the dataset. This flag is described in more detail below.
- Finally, note the --render flag being passed to metrical calibrate. This flag lets us watch the detection phase of the calibration as it happens in real time. Rendering can have a large impact on performance, but it is invaluable for debugging data quality issues, and it has been enabled here so that you can watch the dataset as it's being processed.
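As mentioned above, the generated plex is plain JSON, so you can peek at its structure before calibrating. One quick way to do that, assuming you have the common jq utility installed (it's a separate tool, not part of MetriCal):

# List the top-level keys of the generated plex to get a feel for
# its structure before the calibration step runs.
jq 'keys' plex.json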
MetriCal depends on Rerun for all of its rendering. As such, you'll need a specific version of Rerun installed on your machine to use the --render flag. Please ensure that you've followed the visualization configuration instructions before running this script.
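For reference, one common way to install the Rerun viewer is through pip, sketched below. The exact version MetriCal expects is listed in the visualization configuration instructions, so pin to that rather than taking the latest release.

# Illustrative install of the Rerun viewer via pip. Pin the version
# called out in the visualization configuration instructions.
pip install rerun-sdk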
You should now be ready to run the script. When you start it, it will display a visualization window with both IMU graphs and detections overlaid on the camera frames.
While the calibration is running, take specific note of the frequency and magnitude of the sensor motion, as well as the still periods that follow each period of motion. When it comes time to capture your own data, try to replicate these motion patterns to the best of your ability.
When the script finishes, you'll be left with three artifacts:

- plex.json - as described in the prior section.
- report.html - a human-readable summary of the calibration run (one way to open it from the shell follows this list). Everything in the report is also logged to your console in real time during the calibration. You can learn more about interpreting the report here.
- results.json - a file containing the final calibration and various other metrics. You can learn more about results JSON files here, and about manipulating your results using shape commands here.
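As a quick follow-up, you can review the report straight from the shell; for instance, on Linux:

# Open the human-readable report in your default browser.
xdg-open report.html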
And that's it! Hopefully this trial run has given you a better understanding of how to capture your own IMU ↔ Camera calibration dataset.
Data Capture Guidelines
Maximize View of Object Spaces for Duration of Capture
IMU calibrations are structured such that MetriCal jointly solves for the first-order gyroscope and accelerometer biases in addition to the relative IMU-from-camera extrinsic. This is done by comparing the world pose (or world extrinsic) of the camera between frames to the expected motion measured by the IMU's accelerometer and gyroscope.
Because of how this problem is posed, the best way to produce consistent, precise IMU ↔ camera calibrations is to maximize the visibility of one or more targets in the object space from one of the cameras being calibrated alongside the IMU. Put differently: avoid capturing sections of data where the IMU is recording but no object space or target can be seen from any camera. Doing so can lead to misleading bias estimates.
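The --render flag makes dropout in detections easy to spot while the calibration runs. As a rough complementary check, you can confirm that the camera topic was at least recording for the full capture using the mcap CLI (a separate tool, not part of MetriCal):

# Summarize each channel's message count and time range; the camera
# topic should span the same window as the IMU topic. Note that this
# only confirms the camera was recording, not that a target was visible.
mcap info camera_imu_box3.mcap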
Excite All IMU Axes During Capture
IMU calibrations are no different from any other modality in that they are an entirely data-driven process. In particular, the underlying data needs to demonstrate observable translational and rotational motion in order for MetriCal to understand the motion path that the IMU has followed.
This is what is meant by "exciting" an IMU: accelerations and rotational velocities must be observable in the data (different enough from the underlying noise in the measurements) so as to be separately distinguishable from e.g. the biases. This means that when capturing data to calibrate between an IMU and one or more cameras, it is important to move the IMU translationally across all accelerometer axes and rotationally about all gyroscope axes. This motion can be repetitive so long as a sufficient magnitude of motion has been achieved. It is difficult to prescribe an exact magnitude, as it depends on the kind of IMU being calibrated (e.g. whether it is a MEMS IMU, how large it is, and what its noise characteristics are).
We suggest alternating between periods of "excitement" (motion) and periods of holding still, so that the camera(s) can accurately and precisely measure the given object space.
If you find yourself still having trouble getting a sufficient number of observations to produce reliable calibrations, we suggest bumping up the threshold for our motion filter heuristic when calling metrical calibrate. This is controlled by the --camera-motion-threshold flag. A value of 3.0 through 5.0 can sometimes improve the quality of the calibration by a significant amount.
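For example, to re-run the calibration step from the practical example above with a higher threshold:

# Same calibrate invocation as the example script, with the motion
# filter threshold raised from 4.0 to 5.0.
metrical calibrate \
  --camera-motion-threshold 5.0 \
  -o results.json \
  --report-path report.html \
  camera_imu_box3.mcap plex.json object.json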
Reduce Motion Blur in Camera Images
This advice holds for both multi-camera and IMU ↔ camera calibrations. It is often advisable to reduce the effects of motion blur so as to produce crisper, more detailed images to calibrate against. Some ways to do this are to:
- Always use a global shutter camera
- Reduce the overall exposure time of the camera (one Linux-specific sketch follows this list)
- Avoid over-exciting IMU motion; don't be afraid to slow down a little if you find that the object space is rarely, if ever, detected.
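If your camera exposes its controls through V4L2 on Linux, shortening the exposure might look like the sketch below. Control names vary by driver, so treat these as assumptions and check the --list-ctrls output on your own device.

# Discover which controls your camera driver actually exposes.
v4l2-ctl -d /dev/video0 --list-ctrls
# Illustrative only: switch to manual exposure and shorten it.
# Control names and value ranges differ from driver to driver.
v4l2-ctl -d /dev/video0 --set-ctrl=auto_exposure=1
v4l2-ctl -d /dev/video0 --set-ctrl=exposure_time_absolute=100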
MetriCal currently does not perform well with IMU ↔ camera calibrations if the camera is a rolling shutter camera. We advise calibrating with a global shutter camera whenever possible.