Core principles of metrology
Metrology has three basic principles:
- Traceability: Metrological traceability is a property of a measurement result that relates the measured value to a stated metrological reference through an unbroken chain of calibrations or comparisons. It requires that, for each step in the traceability chain, uncertainties are evaluated and methodologies are documented. For Earth Observation, traceability includes the formal calibration traceability of the instrument to appropriate references in preflight, in-orbit and vicarious calibrations, including the metrological traceability of any references (e.g. in situ observations).
- Uncertainty Analysis: Uncertainty analysis is the review of all sources of uncertainty and the propagation of that uncertainty through the traceability chain. It is based on the Guide to the Expression of Uncertainty in Measurement (the GUM). For Earth Observation, uncertainty analysis needs to consider not only the calibration of satellite data (e.g. for L1 processing) but also the propagation of those uncertainties through retrieval algorithms for L2 and above processing.
- Comparison: Metrologists validate uncertainty analysis and confirm traceability through comparisons. The Mutual Recognition Arrangement requires regular, formal international comparisons that are conducted under strict rules. Earth Observation comparisons are carried out between sensors, and between sensors and in situ observations, using simultaneous observations, transfer standards (e.g. ground observations or pseudo-invariant sites used to perform double differences), or zonal and global averages. Traditionally, Earth Observation comparisons have been performed to estimate inter-sensor differences; we are only beginning to use them to validate uncertainties.
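The uncertainty-propagation step described above can be sketched numerically. Below is a minimal Monte Carlo illustration, not any agency's operational code: the measurement function (radiance = gain × (counts − dark)) and all values and standard uncertainties are assumptions chosen purely for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo draws

# Hypothetical L1 calibration: radiance = gain * (counts - dark).
# All values and standard uncertainties below are illustrative only.
counts, u_counts = 1250.0, 3.0     # raw instrument counts and their noise
dark,   u_dark   = 50.0,   1.5     # dark reading and its uncertainty
gain,   u_gain   = 0.02,   0.0002  # calibration gain, traceable to a reference

# Draw each input from the probability distribution its uncertainty describes
c = rng.normal(counts, u_counts, N)
d = rng.normal(dark,   u_dark,   N)
g = rng.normal(gain,   u_gain,   N)

# Propagate through the measurement function
radiance = g * (c - d)

print(f"radiance = {radiance.mean():.3f}, "
      f"standard uncertainty = {radiance.std(ddof=1):.3f}")
```

The standard uncertainty of the radiance emerges as the standard deviation of the propagated distribution; for a simple measurement function like this, the GUM's analytical law of propagation of uncertainty gives the same result.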
For further information see:
- The ESA Fiducial Reference Measurements (FRM) programme for examples of improved traceability of in situ observations
- QA4EO Framework: Quality Assurance Framework for Earth Observation
- The BIPM website for information about world metrology and access to the GUM and its supplements
- NPL e-Learning modules on Introduction to measurement uncertainty and Introduction to metrology, or equivalent PDF documents.
- Metrology for Earth Observation (MetEOC) project series: text book and eLearning course on uncertainty analysis for Earth Observation.
- An introduction to uncertainty in measurement by Les Kirkup and Bob Frenkel (book)
Uncertainty and errors
The terms ‘error’ and ‘uncertainty’ are not synonyms, although they are often confused. To understand the distinction, consider the result of a measurement – the measured value. The value will differ from the true value for several reasons, some of which we may know about. In these cases, we apply a correction. A correction is applied to a measured value to account for known differences; for example, the measured value may be multiplied by a gain determined during the instrument’s calibration, or a measured optical signal may have a dark reading subtracted. A correction will never be perfectly known, and there will also be other effects that cannot be corrected, so after correction there will always be a residual, unknown error – an unknown difference between the measured value and the (unknown) true value.

The specific error in the result of a particular measurement cannot be known, but we can describe it as a draw from a probability distribution function. The uncertainty associated with the measured value is a measure of that probability distribution function; in particular, the standard uncertainty is the standard deviation of the probability distribution. There are generally several ‘sources of uncertainty’ that jointly contribute to the uncertainty associated with the measured value. These include uncertainties associated with the way the measurement is set up, the values indicated by instruments, and residual uncertainties associated with the corrections applied. The final (unknown) error on the measured value can be considered to be drawn from the overall probability distribution described by the uncertainty associated with the measured value, which is built up from the probability distributions associated with all the different sources of uncertainty.

(Strictly, the GUM describes uncertainty as the dispersion of values that could reasonably be attributed to the measurand. In this view it is not strictly the probability distribution of errors around a true value, but a distribution of possible values around the measured value. The explanation above is therefore an oversimplification that is not strictly GUM-compliant; it can, however, clarify the distinction between ‘error’ and ‘uncertainty’.)
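The distinction can be made concrete with a short simulation. This is an illustrative sketch only: the true value is fixed here purely so that errors can be generated, and the two uncertainty sources (instrument noise and a residual correction uncertainty) and their values are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000  # number of simulated repeated measurements

true_value = 10.0  # unknowable in reality; fixed here only for illustration
u_noise = 0.05     # assumed standard uncertainty from instrument noise
u_corr  = 0.03     # assumed residual standard uncertainty of an applied correction

# Each measurement's specific (unknown) error is one draw from each
# source's probability distribution; the sources combine additively here.
error = rng.normal(0.0, u_noise, N) + rng.normal(0.0, u_corr, N)
measured = true_value + error

# For independent sources, the combined standard uncertainty is the
# root-sum-square of the individual standard uncertainties.
combined = np.hypot(u_noise, u_corr)

# The dispersion of simulated results should match the combined standard
# uncertainty (within Monte Carlo noise), even though no individual
# error is ever known.
print(measured.std(ddof=1), combined)
```

No single error in the simulation is "the" uncertainty; the standard uncertainty describes the distribution the errors are drawn from.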
- See later material on sensor effects and error correlation
- See FIDUCEO tutorial on measurement functions
For a vocabulary of common metrological terms, see the International Vocabulary of Metrology (the VIM), available from the BIPM website.
Rules of notation
As in all disciplines, vocabulary and notation have very specific meanings in metrology. When we work in multidisciplinary teams, differences of vocabulary and notation may at first seem to be a barrier but, with care, can lead to greater mutual understanding.
These rules are based on the ISO 80000 standard series.
- In equations, italics represent a variable/quantity; upright text is used for labels. For example, in the symbol L_Earth for radiance of the Earth, the subscript ‘Earth’ is an upright label, while the quantity symbols themselves, such as L or the wavelength λ, are italic.
- Write a space between the numerical value and the unit, and between individual unit symbols; use a non-breaking space to stop the unit being on a different line to the measured value, e.g. 300 K or 5 m s⁻¹. Ideally avoid using the / in units (write m s⁻¹ rather than m/s).
- Unit names are always written with a lower-case letter when written out in full (e.g. 300 kelvin), except for “degrees Celsius”
- Unit symbols are always in upright type.
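The spacing rule above can be applied in software. Here is a hypothetical helper (the function name and formatting choices are illustrative, not part of any standard library) that joins a value and its unit with a non-breaking space and uses a negative exponent instead of ‘/’:

```python
# Non-breaking space (U+00A0): keeps the unit from wrapping onto a
# different line from the measured value.
NBSP = "\u00a0"

def format_quantity(value: float, unit: str) -> str:
    """Format a measured value with its unit, per the notation rules above."""
    return f"{value:g}{NBSP}{unit}"

print(format_quantity(300, "K"))                      # 300 K, with a non-breaking space
print(format_quantity(5.1, f"m{NBSP}s\u207b\u00b9"))  # 5.1 m s⁻¹ rather than 5.1 m/s
```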