Version: 11.0

Camera Models

Below are all supported camera intrinsics models in MetriCal. If there is a model that you use that is not listed here, just contact us! We're always looking to expand our support.

Common Variables and Definitions

| Variable | Description |
| --- | --- |
| $x_{c}$, $y_{c}$ | Pixel coordinates in the image plane, with origin at the principal point |
| $X$, $Y$, $Z$ | Feature coordinates in the world, in 3D Euclidean space |
| $\hat X$, $\hat Y$, $\hat Z$ | Corrected camera ray, in homogeneous coordinates centered on the camera origin |
| $\hat X_{dist}$, $\hat Y_{dist}$, $\hat Z_{dist}$ | Distorted camera ray, in homogeneous coordinates centered on the camera origin |

Modeling (3D → 2D) refers to projecting a 3D point in the world to a 2D point in the image, given the intrinsics provided. In other words, it models the effect of distortion on a 3D point. This is also known as "forward projection".

Correcting (2D → 3D) refers to the process of finding the camera ray that is created when intrinsics are applied to a 2D point in the image. When "undistorting" a pixel, this can be thought of as finding the corrected ray's point of intersection with the image plane. In other words, it corrects for the effect of distortion. This is also known as "inverse projection".

Unified refers to a model that can be used to both model and correct for distortion.

No Distortion

MetriCal keyword: no_distortion

This model is a classic pinhole projection with no distortion or affine effects. This model is most applicable when you're already correcting your images with a rectification process, or when you're using a camera with a very low distortion profile.

| Parameter | Description |
| --- | --- |
| $f$ | focal length (px) |
| $c_{x}$ | principal point in x (px) |
| $c_{y}$ | principal point in y (px) |

In practice, nearly any camera that is already corrected for distortion uses this model. All models on this page are either pinhole models by design, or degrade to a pinhole model when no distortion is present.

$$
\begin{align}
x_{c} &= f \frac{X}{Z} + c_{x} \\
y_{c} &= f \frac{Y}{Z} + c_{y}
\end{align}
$$
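These two lines translate almost directly into code. The sketch below is illustrative Python, not MetriCal's implementation, and the function name is ours:

```python
# Illustrative pinhole projection with no distortion (hypothetical helper).
def project_pinhole(X, Y, Z, f, cx, cy):
    """Project a 3D point (X, Y, Z), with Z > 0, to pixel coordinates."""
    return f * X / Z + cx, f * Y / Z + cy

# A point on the optical axis lands exactly on the principal point:
# project_pinhole(0.0, 0.0, 1.0, 600.0, 320.0, 240.0) -> (320.0, 240.0)
```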

OpenCV RadTan

Original Reference

OpenCV.org. Camera Calibration and 3D Reconstruction documentation. OpenCV 4.10-dev. https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga7dfb72c9cf9780a347fbe3d1c47e5d5a

MetriCal keyword: opencv_radtan

Type: Modeling

This model is based on OpenCV's default distortion model, which is a modified Brown-Conrady model. If you've ever used OpenCV, you've almost certainly used this model.

| Parameter | Description |
| --- | --- |
| $f$ | focal length (px) |
| $c_{x}$ | principal point in x (px) |
| $c_{y}$ | principal point in y (px) |
| $k_{1}$ | first radial distortion term |
| $k_{2}$ | second radial distortion term |
| $k_{3}$ | third radial distortion term |
| $p_{1}$ | first tangential distortion term |
| $p_{2}$ | second tangential distortion term |

Common Use Cases

OpenCV's adoption of this model has made it the de facto starting point for most calibration tasks. However, this does not mean it's multi-purpose. OpenCV RadTan is best suited for cameras with a field of view of 90° or less.

A good number of sensor packages use this model, including:

| Model | Cameras With Model |
| --- | --- |
| Intel RealSense D435 | All cameras |
| Intel RealSense D435i | All cameras |
| Intel RealSense D455 | All cameras |

Modeling

$$
\begin{align}
X' &= X / Z \\
Y' &= Y / Z \\
r^2 &= X'^2 + Y'^2 \\
r' &= 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \\
X_{dist} &= r'X' + 2p_1X'Y' + p_2(r^2 + 2X'^2) \\
Y_{dist} &= r'Y' + 2p_2X'Y' + p_1(r^2 + 2Y'^2) \\
x_{c} &= f X_{dist} + c_{x} \\
y_{c} &= f Y_{dist} + c_{y}
\end{align}
$$
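For reference, the modeling equations transcribe directly into Python. This is an illustrative sketch, not MetriCal's implementation; with all five distortion terms at zero it collapses to the pinhole projection:

```python
# Illustrative OpenCV RadTan forward projection (hypothetical helper).
def project_opencv_radtan(X, Y, Z, f, cx, cy, k1, k2, k3, p1, p2):
    """Forward-project a 3D point through the RadTan model."""
    Xp, Yp = X / Z, Y / Z
    r2 = Xp * Xp + Yp * Yp
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    X_dist = radial * Xp + 2 * p1 * Xp * Yp + p2 * (r2 + 2 * Xp * Xp)
    Y_dist = radial * Yp + 2 * p2 * Xp * Yp + p1 * (r2 + 2 * Yp * Yp)
    return f * X_dist + cx, f * Y_dist + cy
```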

Correcting

Correcting for OpenCV RadTan is a non-linear process. The most common method is to run a non-linear optimization to find the corrected point. This is the method used in MetriCal.
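For illustration, one lightweight alternative to a full optimizer is fixed-point iteration on the normalized coordinates. This is a sketch with hypothetical names, reliable only for modest distortion:

```python
# Illustrative iterative undistortion for the RadTan model (not MetriCal code).
def undistort_opencv_radtan(xc, yc, f, cx, cy, k1, k2, k3, p1, p2, iters=20):
    """Recover the normalized, undistorted ray (x, y, 1) behind a pixel."""
    xd, yd = (xc - cx) / f, (yc - cy) / f  # normalized distorted coordinates
    x, y = xd, yd                          # initial guess: the distorted point
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)  # tangential offsets
        dy = 2 * p2 * x * y + p1 * (r2 + 2 * y * y)
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return x, y, 1.0
```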

OpenCV Fisheye

Original Reference

OpenCV.org. Fisheye camera model documentation. OpenCV 3.4. https://docs.opencv.org/3.4/db/d58/group__calib3d__fisheye.html

MetriCal keyword: opencv_fisheye

Type: Modeling

This model is based on OpenCV's Fisheye lens model, which is a modified Kannala-Brandt model. It has no tangential distortion terms, but is robust to wide-angle lens distortion.

| Parameter | Description |
| --- | --- |
| $f$ | focal length (px) |
| $c_{x}$ | principal point in x (px) |
| $c_{y}$ | principal point in y (px) |
| $k_{1}$ | first radial distortion term |
| $k_{2}$ | second radial distortion term |
| $k_{3}$ | third radial distortion term |
| $k_{4}$ | fourth radial distortion term |

Common Use Cases

Any camera with a fisheye lens below 140° diagonal field of view will probably benefit from this model.

Modeling

$$
\begin{align}
X' &= X / Z \\
Y' &= Y / Z \\
r^2 &= X'^2 + Y'^2 \\
\theta &= \arctan(r) \\
\theta_d &= \theta (1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8) \\
X_{dist} &= \frac{\theta_d}{r} X' \\
Y_{dist} &= \frac{\theta_d}{r} Y' \\
x_{c} &= f X_{dist} + c_{x} \\
y_{c} &= f Y_{dist} + c_{y}
\end{align}
$$
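An illustrative transcription of the fisheye model (not MetriCal's implementation). The $r \to 0$ guard is our addition, using the fact that $\theta_d / r \to 1$ on axis:

```python
import math

# Illustrative OpenCV Fisheye forward projection (hypothetical helper).
def project_opencv_fisheye(X, Y, Z, f, cx, cy, k1, k2, k3, k4):
    """Forward-project a 3D point through the fisheye model."""
    Xp, Yp = X / Z, Y / Z
    r = math.hypot(Xp, Yp)
    theta = math.atan(r)
    theta_d = theta * (1 + k1 * theta**2 + k2 * theta**4
                       + k3 * theta**6 + k4 * theta**8)
    scale = theta_d / r if r > 0.0 else 1.0  # theta_d / r -> 1 as r -> 0
    return f * scale * Xp + cx, f * scale * Yp + cy
```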

Correcting

Correcting for OpenCV Fisheye is a non-linear process. The most common method is to run a non-linear optimization to find the corrected point. This is the method used in MetriCal.

OpenCV Rational

Original Reference

OpenCV.org. Camera Calibration and 3D Reconstruction documentation. OpenCV 4.10-dev. https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga7dfb72c9cf9780a347fbe3d1c47e5d5a

MetriCal keyword: opencv_rational

Type: Modeling

OpenCV Rational is the full distortion model used by OpenCV. It is an extension of the RadTan model, in which the radial distortion is modeled as a rational function.

| Parameter | Description |
| --- | --- |
| $f$ | focal length (px) |
| $c_{x}$ | principal point in x (px) |
| $c_{y}$ | principal point in y (px) |
| $k_{1}$ | first radial distortion term |
| $k_{2}$ | second radial distortion term |
| $k_{3}$ | third radial distortion term |
| $p_{1}$ | first tangential distortion term |
| $p_{2}$ | second tangential distortion term |
| $k_{4}$ | fourth radial distortion term |
| $k_{5}$ | fifth radial distortion term |
| $k_{6}$ | sixth radial distortion term |

Common Use Cases

This model is mostly equivalent to OpenCV RadTan. Given the number of parameters, this model can overfit on the available data, making it tricky to generalize. However, if you know your camera benefits from this specification, it does offer additional flexibility.

As this is a modified version of OpenCV RadTan, we recommend its use for lenses with a field of view of 90° or less.

Modeling

$$
\begin{align}
X' &= X / Z \\
Y' &= Y / Z \\
r^2 &= X'^2 + Y'^2 \\
r' &= \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} \\
X_{dist} &= r' X' + 2p_1X'Y' + p_2(r^2 + 2X'^2) \\
Y_{dist} &= r' Y' + 2p_2X'Y' + p_1(r^2 + 2Y'^2) \\
x_{c} &= f X_{dist} + c_{x} \\
y_{c} &= f Y_{dist} + c_{y}
\end{align}
$$
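An illustrative transcription (not MetriCal's implementation). Note that setting $k_4, k_5, k_6$ equal to $k_1, k_2, k_3$ cancels the radial factor entirely, which hints at how easily this model's extra parameters can trade off against each other:

```python
# Illustrative OpenCV Rational forward projection (hypothetical helper).
def project_opencv_rational(X, Y, Z, f, cx, cy,
                            k1, k2, k3, p1, p2, k4, k5, k6):
    """Forward-project a 3D point through the rational model."""
    Xp, Yp = X / Z, Y / Z
    r2 = Xp * Xp + Yp * Yp
    radial = ((1 + k1 * r2 + k2 * r2**2 + k3 * r2**3)
              / (1 + k4 * r2 + k5 * r2**2 + k6 * r2**3))
    X_dist = radial * Xp + 2 * p1 * Xp * Yp + p2 * (r2 + 2 * Xp * Xp)
    Y_dist = radial * Yp + 2 * p2 * Xp * Yp + p1 * (r2 + 2 * Yp * Yp)
    return f * X_dist + cx, f * Y_dist + cy
```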

Correcting

Correcting for OpenCV Rational is a non-linear process. The most common method is to run a non-linear optimization to find the corrected point. This is the method used in MetriCal.

Pinhole with (Inverse) Brown-Conrady

Original Publication

A. E. Conrady, Decentred Lens-Systems, Monthly Notices of the Royal Astronomical Society, Volume 79, Issue 5, March 1919, Pages 384–390, https://doi.org/10.1093/mnras/79.5.384

MetriCal keyword: pinhole_with_brown_conrady

Type: Correcting

This model is the first of our inverse models. These models correct for distortion terms in the image space, rather than modeling the effects of distortion in the world space. If you're just looking to correct for distortion, this could be the model for you!

Notice that the model parameters are identical to those found in OpenCV RadTan. This is because both models are based on the Brown-Conrady approach to camera modeling. Don't mix one up for the other!

| Parameter | Description |
| --- | --- |
| $f$ | focal length (px) |
| $c_{x}$ | principal point in x (px) |
| $c_{y}$ | principal point in y (px) |
| $k_{1}$ | first radial distortion term |
| $k_{2}$ | second radial distortion term |
| $k_{3}$ | third radial distortion term |
| $p_{1}$ | first tangential distortion term |
| $p_{2}$ | second tangential distortion term |

Common Use Cases

If you're just correcting for distortion, rather than modeling it, this model is a good choice.

Modeling

Modeling distortion for Inverse Brown-Conrady is a non-linear process. The most common method is to run a non-linear optimization to find the distorted point. This is the method used in MetriCal.

Correcting

$$
\begin{align}
r^2 &= x_{c}^2 + y_{c}^2 \\
r' &= k_1 r^2 + k_2 r^4 + k_3 r^6 \\
x_{corr} &= r'x_{c} + p_1(r^2 + 2x_{c}^2) + 2p_2x_{c}y_{c} \\
y_{corr} &= r'y_{c} + p_2(r^2 + 2y_{c}^2) + 2p_1x_{c}y_{c} \\
\hat X &= x_{c} - x_{corr} \\
\hat Y &= y_{c} - y_{corr} \\
\hat Z &= f
\end{align}
$$
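A direct transcription of the correction equations into Python (illustrative only; the function name is ours). The returned ray is homogeneous with $\hat Z = f$:

```python
# Illustrative inverse Brown-Conrady correction (hypothetical helper).
def correct_brown_conrady(xc, yc, f, k1, k2, k3, p1, p2):
    """Corrected camera ray from pixel coords (origin at the principal point)."""
    r2 = xc * xc + yc * yc
    rp = k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_corr = rp * xc + p1 * (r2 + 2 * xc * xc) + 2 * p2 * xc * yc
    y_corr = rp * yc + p2 * (r2 + 2 * yc * yc) + 2 * p1 * xc * yc
    return xc - x_corr, yc - y_corr, f
```

With all distortion terms at zero the correction vanishes and the ray is simply $(x_c, y_c, f)$.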

Pinhole with (Inverse) Kannala-Brandt

Original Publication

J. Kannala and S. S. Brandt, "A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 8, pp. 1335-1340, Aug. 2006, doi: 10.1109/TPAMI.2006.153.  https://oulu3dvision.github.io/calibgeneric/Kannala_Brandt_calibration.pdf

MetriCal keyword: pinhole_with_kannala_brandt

Type: Correcting

Inverse Kannala-Brandt follows the same paradigm as our other Inverse models: it corrects distortion in image space, rather than modeling it in world space.

Inverse Kannala-Brandt is close to the original Kannala-Brandt model, and therefore shares the same set of distortion parameters as OpenCV Fisheye. Don't get them mixed up!

| Parameter | Description |
| --- | --- |
| $f$ | focal length (px) |
| $c_{x}$ | principal point in x (px) |
| $c_{y}$ | principal point in y (px) |
| $k_{1}$ | first radial distortion term |
| $k_{2}$ | second radial distortion term |
| $k_{3}$ | third radial distortion term |
| $k_{4}$ | fourth radial distortion term |

Common Use Cases

If you're just correcting for distortion, rather than modeling it, this model is a good choice for lenses with a field of view of 140° or less.

Modeling

Modeling distortion for Inverse Kannala-Brandt is a non-linear process. The most common method is to run a non-linear optimization to find the distorted point. This is the method used in MetriCal.

Correction

$$
\begin{align}
r^2 &= x_{c}^2 + y_{c}^2 \\
\theta &= \arctan\left(\frac{r}{f}\right) \\
r' &= \theta (1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8) \\
x_{corr} &= r' \frac{x_{c}}{r} \\
y_{corr} &= r' \frac{y_{c}}{r} \\
\hat X &= x_{c} - x_{corr} \\
\hat Y &= y_{c} - y_{corr} \\
\hat Z &= f
\end{align}
$$
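A transcription of the correction equations as written (illustrative, not MetriCal's implementation; the $r = 0$ guard is our addition for the on-axis pixel):

```python
import math

# Illustrative inverse Kannala-Brandt correction (hypothetical helper).
def correct_kannala_brandt(xc, yc, f, k1, k2, k3, k4):
    """Corrected camera ray from pixel coords (origin at the principal point)."""
    r = math.hypot(xc, yc)
    if r == 0.0:
        return 0.0, 0.0, f  # on-axis pixel: the ray is the optical axis
    theta = math.atan(r / f)
    rp = theta * (1 + k1 * theta**2 + k2 * theta**4
                  + k3 * theta**6 + k4 * theta**8)
    return xc - rp * xc / r, yc - rp * yc / r, f
```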

EUCM

Original Publication

Khomutenko, B., Garcia, G., & Martinet, P. (2016). An Enhanced Unified Camera Model. IEEE Robotics and Automation Letters, 1(1), 137–144. doi:10.1109/lra.2015.2502921. https://hal.science/hal-01722264/document

MetriCal keyword: eucm

Type: Unified

EUCM stands for Enhanced Unified Camera Model (aka Extended Unified Camera Model), and is a riff on the Unified Camera Model. This model is "unified" because it offers direct calculation of both modeling and correcting for distortion. At lower distortion levels, this model naturally degrades into a pinhole model.

This model is also unique in that it primarily operates on camera rays, rather than requiring a certain focal length or pixel distance to mathematically operate. The Modeling and Correction operations below will convert camera rays to — and from — a distorted state. Users who wish to operate in the image plane should convert these homogeneous coordinates to a Z^\hat Z matching the focal length.

| Parameter | Description |
| --- | --- |
| $f$ | focal length (px) |
| $c_{x}$ | principal point in x (px) |
| $c_{y}$ | principal point in y (px) |
| $\alpha$ | The distortion parameter that relates ellipsoid and pinhole projection |
| $\beta$ | The distortion parameter that controls the ellipsoid shape |

Common Use Cases

EUCM is (at time of writing) becoming a popular choice for cameras with strong radial distortion. Its ability to model distortion in a way that is both accurate and efficient makes it a good choice for many applications. It is capable of handling distortions for lenses with a field of view greater than 180°.

Modeling

$$
\begin{align}
d &= \sqrt{\beta (\hat X^2 + \hat Y^2) + \hat Z^2} \\
\gamma &= 1 - \alpha \\
s &= \frac{1}{\alpha d + \gamma \hat Z} \\
\hat X_{dist} &= \hat X s \\
\hat Y_{dist} &= \hat Y s \\
\hat Z_{dist} &= 1 \\
x_{c} &= f \hat X_{dist} + c_{x} \\
y_{c} &= f \hat Y_{dist} + c_{y}
\end{align}
$$
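An illustrative transcription operating on camera rays (not MetriCal's implementation). With $\alpha = 0$ the scale $s$ reduces to $1/\hat Z$, i.e. the pinhole projection:

```python
import math

# Illustrative EUCM forward projection (hypothetical helper).
def project_eucm(X, Y, Z, f, cx, cy, alpha, beta):
    """Project a camera ray through the EUCM model."""
    d = math.sqrt(beta * (X * X + Y * Y) + Z * Z)
    s = 1.0 / (alpha * d + (1.0 - alpha) * Z)
    return f * X * s + cx, f * Y * s + cy
```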

Correction

$$
\begin{align}
(m_x, m_y) &= (\hat X_{dist} / \hat Z_{dist}, \hat Y_{dist} / \hat Z_{dist}) \\
r^2 &= m_x^2 + m_y^2 \\
\gamma &= 1 - \alpha \\
m_z &= \frac{1 - \beta \alpha^2 r^2}{\alpha \sqrt{1 - (\alpha - \gamma)\beta r^2} + \gamma} \\
s &= \frac{1}{\sqrt{r^2 + m_z^2}} \\
\hat X &= m_x s \\
\hat Y &= m_y s \\
\hat Z &= m_z s
\end{align}
$$
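An illustrative transcription of the correction, mapping distorted homogeneous coordinates back to a unit-norm camera ray (hypothetical names, not MetriCal code):

```python
import math

# Illustrative EUCM unprojection (hypothetical helper).
def unproject_eucm(mx, my, alpha, beta):
    """Unit-norm camera ray from distorted homogeneous coords (m_x, m_y)."""
    r2 = mx * mx + my * my
    gamma = 1.0 - alpha
    mz = (1.0 - beta * alpha * alpha * r2) / (
        alpha * math.sqrt(1.0 - (alpha - gamma) * beta * r2) + gamma)
    s = 1.0 / math.sqrt(r2 + mz * mz)
    return mx * s, my * s, mz * s
```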

Double Sphere

Original Publication

Usenko, V., Demmel, N., & Cremers, D. (2018). The Double Sphere Camera Model. 2018 International Conference on 3D Vision (3DV). doi:10.1109/3dv.2018.00069  https://arxiv.org/abs/1807.08957

MetriCal keyword: double_sphere

Type: Unified

Double Sphere is the newest model on this page. Like EUCM, it also offers direct computation of both modeling and correction on camera rays. However, it uses two spheres to model the effects of even strong radial distortion. At lower distortion levels, this model naturally degrades into a pinhole model.

| Parameter | Description |
| --- | --- |
| $f$ | focal length (px) |
| $c_{x}$ | principal point in x (px) |
| $c_{y}$ | principal point in y (px) |
| $\xi$ | The distortion parameter corresponding to the distance between spheres |
| $\alpha$ | The distortion parameter that relates the second sphere and pinhole projection |

Common Use Cases

Its use of projection through its "double spheres" makes it an ideal model for ultra-wide field of view lenses. The lenses tested in its original publication had fields of view ranging from 122° to 195°!

Modeling

$$
\begin{align}
d_{1} &= \sqrt{\hat X^2 + \hat Y^2 + \hat Z^2} \\
Z_{mod} &= \xi d_{1} + \hat Z \\
d_2 &= \sqrt{\hat X^2 + \hat Y^2 + Z_{mod}^2} \\
s &= \frac{1}{\alpha d_2 + (1 - \alpha) Z_{mod}} \\
\hat X_{dist} &= \hat X s \\
\hat Y_{dist} &= \hat Y s \\
\hat Z_{dist} &= 1 \\
x_{c} &= f \hat X_{dist} + c_{x} \\
y_{c} &= f \hat Y_{dist} + c_{y}
\end{align}
$$
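An illustrative transcription operating on camera rays (not MetriCal's implementation). With $\xi = 0$ and $\alpha = 0$ the two spheres drop out and it reduces to the pinhole projection:

```python
import math

# Illustrative Double Sphere forward projection (hypothetical helper).
def project_double_sphere(X, Y, Z, f, cx, cy, xi, alpha):
    """Project a camera ray through the Double Sphere model."""
    d1 = math.sqrt(X * X + Y * Y + Z * Z)
    z_mod = xi * d1 + Z
    d2 = math.sqrt(X * X + Y * Y + z_mod * z_mod)
    s = 1.0 / (alpha * d2 + (1.0 - alpha) * z_mod)
    return f * X * s + cx, f * Y * s + cy
```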

Correcting

$$
\begin{align}
(m_x, m_y) &= (\hat X_{dist} / \hat Z_{dist}, \hat Y_{dist} / \hat Z_{dist}) \\
r^2 &= m_x^2 + m_y^2 \\
m_z &= \frac{1 - \alpha^2 r^2}{\alpha \sqrt{1 - (2\alpha - 1) r^2} + (1 - \alpha)} \\
s &= \frac{m_z \xi + \sqrt{m_z^2 + (1 - \xi^2) r^2}}{m_z^2 + r^2} \\
\hat X &= m_x s \\
\hat Y &= m_y s \\
\hat Z &= m_z s - \xi
\end{align}
$$
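An illustrative transcription of the correction (hypothetical names, not MetriCal code):

```python
import math

# Illustrative Double Sphere unprojection (hypothetical helper).
def unproject_double_sphere(mx, my, xi, alpha):
    """Camera ray from distorted homogeneous coords (m_x, m_y)."""
    r2 = mx * mx + my * my
    mz = (1.0 - alpha * alpha * r2) / (
        alpha * math.sqrt(1.0 - (2.0 * alpha - 1.0) * r2) + (1.0 - alpha))
    s = (mz * xi + math.sqrt(mz * mz + (1.0 - xi * xi) * r2)) / (mz * mz + r2)
    return mx * s, my * s, mz * s - xi
```

With $\xi = 0$ and $\alpha = 0$ this returns the normalized pinhole ray $(m_x, m_y, 1)/\sqrt{1 + r^2}$.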

Omnidirectional (Omni)

Original Publication

Mei, C. and Rives, P. Single View Point Omnidirectional Camera Calibration from Planar Grids. Proceedings 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 2007, pp. 3945-3950, doi: 10.1109/ROBOT.2007.364084. https://www.robots.ox.ac.uk/~cmei/articles/single_viewpoint_calib_mei_07.pdf

MetriCal keyword: omni

Type: Modeling

The Omnidirectional camera model is designed to handle the unique distortion profile of a catadioptric camera system, or one that combines mirrors and lenses together. These systems are often used for camera systems that seek to capture a full 360° field of view (or close to it).

In "A Unifying Theory for Central Panoramic Systems and Practical Implications" (Geyer, Daniilidis), the authors show that all mirror surfaces can be modeled with a projection from an imaginary unit sphere onto a plane perpendicular to the sphere center and the conic created by the mirror. The Omnidirectional camera model codifies this relationship mathematically.

Yes, it's a little confusing

We won't go into it here, but the papers linked above explain it well.

The Omnidirectional camera model also implements the same radial and tangential distortion terms as OpenCV RadTan. However, while OpenCV RadTan uses three radial distortion terms, this model uses only two. The reason? Everyone else did it this way (even the authors' original implementation), so now it's convention.

| Parameter | Description |
| --- | --- |
| $f$ | generalized focal length (px). This term is not a "true" focal length, but rather the camera focal length scaled by a collinear factor $\eta$ that represents the effect of the mirror |
| $c_{x}$ | principal point in x (px) |
| $c_{y}$ | principal point in y (px) |
| $\xi$ | the distance between the unit sphere and the "projection" sphere upon which the focal plane is projected |
| $k_{1}$ | first radial distortion term |
| $k_{2}$ | second radial distortion term |
| $p_{1}$ | first tangential distortion term |
| $p_{2}$ | second tangential distortion term |

Common Use Cases

The omnidirectional camera model is best used for extreme fields of view. Anything greater than 140° would be well-served by this model.

Modeling

$$
\begin{align}
\mathbf{X} &= [X, Y, Z] \\
\mathbf{X}_S &= \frac{\mathbf{X}}{\lVert \mathbf{X} \rVert} \\
\mathbf{X}_C &= [X_S, Y_S, Z_S + \xi] \\
X' &= X_C / Z_C \\
Y' &= Y_C / Z_C \\
r^2 &= X'^2 + Y'^2 \\
r' &= 1 + k_1 r^2 + k_2 r^4 \\
X_{dist} &= r' X' + 2p_1X'Y' + p_2(r^2 + 2X'^2) \\
Y_{dist} &= r' Y' + 2p_2X'Y' + p_1(r^2 + 2Y'^2) \\
x_{c} &= f X_{dist} + c_{x} \\
y_{c} &= f Y_{dist} + c_{y}
\end{align}
$$
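An illustrative transcription (not MetriCal's implementation): normalize onto the unit sphere, shift by $\xi$, then apply the RadTan-style distortion:

```python
import math

# Illustrative Omnidirectional (Mei) forward projection (hypothetical helper).
def project_omni(X, Y, Z, f, cx, cy, xi, k1, k2, p1, p2):
    """Forward-project a 3D point through the Omnidirectional model."""
    n = math.sqrt(X * X + Y * Y + Z * Z)
    Xs, Ys, Zs = X / n, Y / n, Z / n        # project onto the unit sphere
    Xp, Yp = Xs / (Zs + xi), Ys / (Zs + xi)  # shift by xi, then normalize
    r2 = Xp * Xp + Yp * Yp
    radial = 1 + k1 * r2 + k2 * r2**2
    X_dist = radial * Xp + 2 * p1 * Xp * Yp + p2 * (r2 + 2 * Xp * Xp)
    Y_dist = radial * Yp + 2 * p2 * Xp * Yp + p1 * (r2 + 2 * Yp * Yp)
    return f * X_dist + cx, f * Y_dist + cy
```

With $\xi = 0$ and all distortion terms at zero, the sphere normalization cancels and the model reduces to the pinhole projection.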

Correcting

Though the Omnidirectional model technically has a unified inversion, the introduction of the radial and tangential distortion means that correcting is a non-linear process. The most common method is to run a non-linear optimization to find the corrected point. This is the method used in MetriCal.