Calibration can be described as:
"A statistical optimization process that aims to solve component model, constraints, and object space collectively."
This means that we have many parameters to solve for, many different observations, and an expectation of statistical soundness.
In an ideal world, all of our parameters and observations would be statistically independent and uncorrelated, i.e. the value of one parameter wouldn't affect the estimate of any other. However, this is rarely the case. Many of a component's parameters will be correlated with both the observations and with other parameters.
Correlation also couples errors between parameters. When errors in the determination of one parameter cause errors in the determination of a different parameter, we call that projective compensation: errors in the first parameter are being compensated for by projecting the error into the second.
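To make this concrete, here is a minimal, hypothetical sketch (not MetriCal's actual model) of projective compensation in a simple line fit. The data is generated from a known slope and intercept, but the model wrongly fixes the intercept at zero. Because the observations only cover a narrow range far from the origin, the optimizer absorbs the intercept error into the slope, and the fit still looks good:

```python
import numpy as np

# Generate noiseless observations from a known model: y = 2x + 1.
# The x values cover a narrow range far from the origin, which makes
# slope and intercept nearly interchangeable over this data.
x = np.linspace(10.0, 12.0, 50)
y = 2.0 * x + 1.0

# Mis-modeled fit: assume the intercept is exactly 0 and solve for
# the slope alone (closed-form least squares for y = a*x).
a_hat = np.sum(x * y) / np.sum(x * x)

residuals = y - a_hat * x
rms = np.sqrt(np.mean(residuals ** 2))

# The recovered slope is biased away from the true value of 2.0:
# the intercept error has been projected into the slope estimate.
# Meanwhile, the residual RMS stays small, so the fit "looks" fine.
print(f"estimated slope: {a_hat:.4f} (true: 2.0), residual RMS: {rms:.4f}")
```

The low residuals are the dangerous part: over the observed range the compensated model is nearly indistinguishable from the true one, so residual metrics alone will not reveal the biased parameter.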
Projective compensation can happen as a result of:
- Poor choice of model. If a parameter chosen to model your components conflates several distinct effects, then those effects cannot be optimized in a statistically independent manner.
- Poor data capture. Because the calibration process reconstructs the "object space" from component observations, the way data is collected directly influences how well the optimization can separate the calibration parameters from one another.
It's hard to directly measure projective compensation in an optimization with the statistical tools we have today. However, the output parameter covariance can be used to observe it indirectly. Furthermore, depending on which parameters are correlated, we can often adjust the modeling or calibration process to reduce this effect.
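The covariance-based diagnosis can be sketched as follows. This is an illustrative example (again a simple line fit, not MetriCal's internals): we fit noisy data with `numpy.polyfit`, which can return the parameter covariance matrix, and normalize the off-diagonal term into a correlation coefficient. We then show one way the modeling process can be adjusted to reduce the effect, by re-centering the data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of y = 2x + 1 over a range far from the origin.
x = np.linspace(10.0, 12.0, 200)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)

# Fit a line and request the parameter covariance matrix.
coeffs, cov = np.polyfit(x, y, 1, cov=True)
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
# corr is close to -1: errors in the slope estimate are almost
# perfectly compensated by errors in the intercept estimate.

# Reparameterize by centering x. The underlying data is unchanged,
# but slope and intercept are now nearly uncorrelated.
coeffs_c, cov_c = np.polyfit(x - x.mean(), y, 1, cov=True)
corr_c = cov_c[0, 1] / np.sqrt(cov_c[0, 0] * cov_c[1, 1])

print(f"slope/intercept correlation: {corr:.4f} -> {corr_c:.4f} after centering")
```

The off-diagonal structure of the covariance matrix is the indirect signal: a correlation near ±1 between two parameters means their errors trade off against each other, which is exactly the signature of projective compensation. The centering step shows how a modeling change can decorrelate parameters without collecting new data.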
While it is possible to detect when projective compensation is occurring (and it is almost always occurring to some degree), it is more useful to understand the different results and metrics that MetriCal outputs, and how those can be used to signal what kinds of projective compensation may be present.