
Let's say you have a measurement device whose accuracy you do not know, and a reference measurement device. Both measure a variable $x$. The range of interest is $x_0 < x < x_1$. How do you determine the accuracy of the unknown device over this range?

My course of action would be to gather readings from both devices across the range $x_0$ to $x_1$ and build a distribution of the errors (sketched in code below). The accuracy could then be the error span, $\pm3\sigma$, or something similar - is this correct?

Assumptions:

  • The reference measurement device is calibrated and has virtually no error
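A minimal sketch of that procedure in Python (the readings below are made up, and I'm assuming the two devices are read in pairs at the same points):

```python
import numpy as np

# Hypothetical paired readings taken across the range of interest (x0, x1):
# 'reference' from the calibrated device, 'device' from the unknown one.
reference = np.array([10.0, 12.5, 15.0, 17.5, 20.0, 22.5, 25.0])
device    = np.array([10.2, 12.4, 15.3, 17.9, 20.1, 22.9, 25.4])

errors = device - reference          # the error distribution
bias   = errors.mean()               # systematic offset
sigma  = errors.std(ddof=1)          # spread of the errors

# Two possible accuracy statements: the full error span, or +/- 3 sigma.
print(f"error span : [{errors.min():+.2f}, {errors.max():+.2f}]")
print(f"bias       : {bias:+.2f}")
print(f"+/- 3 sigma: {3 * sigma:.2f}")
```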
JHK

3 Answers

The only way to determine the accuracy of any measuring device is to calibrate it against a device whose accuracy and measurement errors are known.

Your technique is partially correct, but don't treat the whole range of the device as a single population or sample bin, because measurement errors are not always uniform across the range.

For example, for readings between 0 and 1 the error might be -0.2, while for readings between 2 and 3 the error might be +0.6. Your testing must therefore be done in ranges or bands, irrespective of whether the units are mm (for rulers), m/s (for anemometers or speedometers) or Pa (for barometers).

For each range/band you determine the error for that band and then apply that error as a correction to measurements taken with the device being calibrated.
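A short sketch of that banded correction, with made-up data chosen to roughly match the example above (the band edges and the `correct` helper are illustrative, not a standard recipe):

```python
import numpy as np

# Hypothetical calibration pairs: 'true' from the reference instrument,
# 'meas' from the device being calibrated.
true = np.array([0.2, 0.7, 1.4, 1.8, 2.3, 2.9])
meas = np.array([0.0, 0.5, 1.5, 1.9, 2.9, 3.5])

# Bands covering the range of interest; the edges here are arbitrary.
band_edges = np.array([0.0, 1.0, 2.0, 3.0])

# Mean error in each band, estimated from the calibration data.
band_idx   = np.digitize(true, band_edges) - 1
band_error = np.array([(meas - true)[band_idx == i].mean()
                       for i in range(len(band_edges) - 1)])

def correct(reading):
    """Subtract the band-specific error from a raw reading."""
    i = np.clip(np.digitize(reading, band_edges) - 1, 0, len(band_error) - 1)
    return reading - band_error[i]

print(band_error)    # [-0.2, +0.1, +0.6] for this data
print(correct(2.5))  # corrected reading in the 2-3 band: 1.9
```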

Fred

Your approach is broadly correct.

If you are only interested in the accuracy of your system, you probably want to use something like the maximum error. Your accuracy is then $\pm$ the maximum error, with the assumption that the real errors are uniformly distributed within this range (a uniform distribution will often be an overestimate, but it is a simple option when no better information is available).
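For that simple option, the calculation is just the largest absolute error seen during calibration (made-up numbers again):

```python
import numpy as np

# Hypothetical paired readings: reference device vs. device under test.
reference = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
device    = np.array([1.1, 1.9, 3.2, 4.3, 5.1])

# Quote accuracy as +/- the maximum observed error, treating the error as
# uniformly distributed within that bound when nothing better is known.
max_error = np.max(np.abs(device - reference))
print(f"accuracy: +/- {max_error:.2f}")
```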

However, this approach will often give large errors due to systematic effects, which can easily be corrected by fitting a curve (normally a straight line) through a plot of measured versus true values.

This should correct for the bias in your instrument, and you can then calculate the uncertainty from the standard deviation of the residuals. The total uncertainty is normally quoted as a multiple of $\sigma$; the choice of multiple is fairly arbitrary, so you should state the multiple (the $k$ value) or the associated coverage factor. You should also state what distribution you are assuming, as this affects which multiple gives a specific coverage: for a Gaussian distribution 95 % coverage corresponds to $k \approx 2$, while for a uniform distribution it corresponds to $k \approx 1.65$.
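A sketch of that bias correction and uncertainty estimate, assuming a straight-line fit and invented calibration data:

```python
import numpy as np

# Hypothetical calibration data: 'true' from the reference instrument,
# 'meas' from the instrument being characterised.
true = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
meas = np.array([1.15, 2.05, 3.25, 4.10, 5.30, 6.20])

# Fit a straight line meas = a*true + b and invert it to remove the bias.
a, b = np.polyfit(true, meas, 1)
corrected = (meas - b) / a

# Uncertainty from the standard deviation of the residuals, scaled by a
# coverage factor k (k ~ 2 gives ~95 % coverage under a Gaussian assumption).
residuals = corrected - true
sigma = residuals.std(ddof=2)   # two parameters were fitted
k = 2.0
print(f"uncertainty after bias correction: +/- {k * sigma:.3f} (k = {k})")
```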

nivag

I was on a team of quality engineers (though not one of the experts), and they used a visual check: a 2D plot in which the X axis was the first measurement and the Y axis was a second measurement of the same observable feature.

They would repeat this measure/re-measure process and create what they called a "sausage chart": they would eliminate the outlying 2 % of samples and draw a "sausage" around the rest.

You could visually judge the quality of the measurement system by seeing how closely the data points fell to the 45° line.
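A rough sketch of that kind of plot with synthetic data; trimming the most extreme 2 % of first-vs-second differences is one possible reading of the rule they used:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Synthetic repeat measurements of the same features with the same device.
true_values = rng.uniform(10, 20, size=200)
first  = true_values + rng.normal(0, 0.3, size=200)
second = true_values + rng.normal(0, 0.3, size=200)

# Drop the most extreme 2 % of points before judging how tightly the
# cloud hugs the 45-degree line.
diff = np.abs(first - second)
keep = diff <= np.quantile(diff, 0.98)

plt.scatter(first[keep], second[keep], s=10, alpha=0.6)
lims = [first.min(), first.max()]
plt.plot(lims, lims, "r--", label="45-degree line (perfect repeatability)")
plt.xlabel("first measurement")
plt.ylabel("second measurement")
plt.legend()
plt.show()
```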

Baronz