From time to time something comes along that bothers you. You know that in the big scheme of things it’s a trivial matter and that it shouldn’t bother you quite as much as it does. When it happens again it really bothers you. If it becomes a habitual occurrence then sooner or later you are going to feel the need to get it off your chest. I have arrived at that point and feel compelled to say so. I think that’s ok every now and then, right?
My primary complaint is this: ‘accuracy’ and ‘precision’ are not synonyms; they are not the same thing.
‘Accuracy’ means how close something is to being correct, whereas ‘precision’ means how specifically something is described. It is therefore entirely possible to be very precise, perhaps to several decimal places, but at the same time completely wrong, i.e. inaccurate.
Similarly, it is possible to be accurate without being precise. In engineering we almost always prefer accuracy to precision, particularly when added precision makes little or no difference to the level of accuracy.
Let me now expand a little and explain why this bothers me; I assure you it isn’t to be pedantic.
If I ask someone to calculate the amount of force at the base of a column or the amount of deflection in a concrete floor slab I am definitely looking for an accurate number not a precise one. I do not expect the answer to be reported with a precision that cannot be justified.
For example, suppose I report that the force in a concrete column is 14,976.35 kN. I am effectively making the claim that I know the magnitude of the force to the nearest 1/100 of 1 kN, even though the magnitude of the force is almost 15,000 kN. That is a precision of 0.01 kN, and it implies an accuracy of roughly 1 part in 1,500,000.
This is clearly nonsense. I do not know the density of concrete to a precision of two decimal places nor can I measure the floor thickness to that precision. It follows that the reported output is more precise than the input; that cannot be right. While the answer is precise, it is no more accurate than if we had rounded the last four digits.
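To put a number on that claim, here is a quick back-of-the-envelope check in Python [the variable names are mine; the figures are just the hypothetical column force from above]:

```python
# Implied claim when a force of 14,976.35 kN is reported to two decimal places.
reported_force_kN = 14_976.35    # the hypothetical column force from the example
reported_resolution_kN = 0.01    # resolution implied by two decimal places

relative_uncertainty = reported_resolution_kN / reported_force_kN
print(f"Implied accuracy: 1 part in {1 / relative_uncertainty:,.0f}")
# -> Implied accuracy: 1 part in 1,497,635
```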
The concept of significant digits is supposed to help us solve this problem, so perhaps it’s worth a quick recap; just to get it off my chest.
In any number a significant digit is any digit from 1 to 9 or any zero that is not used to show the position of the decimal point. For example:
345, 8.62, 3.80 and 0.00654.
Each of these numbers has three significant digits. So far so good, but we’re not done yet, because there are still some cases where we need to clarify how to treat the zeros.
Let us use the number 76,000 as an example. Should we read this as having two significant digits, with the three zeros merely marking the position of the decimal point, or are one or more of those zeros significant, giving us three, four or five significant digits?
There is of course a convention for dealing with this, though it is unfortunately seldom applied. If we write the same number as 76×10³ then we immediately know there are only two significant digits. If it had been written as 760×10² then we could assume three significant digits, and so on.
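If it helps to see the counting rules in one place, here is a rough Python sketch [the function name is my own invention]; it only handles plain decimal strings and treats trailing zeros in a whole number as placeholders, per the convention above:

```python
def count_sig_figs(text: str) -> int:
    """Rough significant-digit counter for plain decimal strings; a sketch only."""
    digits = text.replace(",", "").lstrip("-")
    if "." in digits:
        # Leading zeros only locate the decimal point; trailing zeros after it do count.
        return len(digits.replace(".", "").lstrip("0"))
    # Whole number: treat trailing zeros as placeholders rather than significant digits.
    return len(digits.lstrip("0").rstrip("0"))

for example in ["345", "8.62", "3.80", "0.00654", "76,000"]:
    print(example, "->", count_sig_figs(example), "significant digits")
# 345, 8.62, 3.80 and 0.00654 each give 3; 76,000 gives 2.
```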
For ease of manipulation during a calculation it is convenient to work with exponents that are multiples of three, at least in the SI system [our American cousins may not find this as useful]. However, the final answer should be converted back to the requisite number of significant digits so that the accuracy of the output is clear.
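Rounding a final answer to a chosen number of significant digits is also easy to automate. A minimal sketch using Python’s general format specifier [again, the function name is mine]:

```python
def to_sig_figs(value: float, figures: int) -> float:
    """Round a value to the given number of significant digits."""
    return float(f"{value:.{figures}g}")

print(to_sig_figs(14_976.35, 3))   # -> 15000.0  [three significant digits]
print(to_sig_figs(14_976.35, 4))   # -> 14980.0
print(to_sig_figs(0.0065432, 3))   # -> 0.00654
```

Note that printing 15000.0 reintroduces the trailing-zero ambiguity, which is exactly why the ×10³ style of notation above is worth using when writing down the final answer.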
Now, if you thought this post was trivial thus far, it’s about to get worse. We need to talk about rounding and why most calculators and most computer software do it wrong. I am talking to you, Microsoft.
The way in which we set aside insignificant digits is called rounding. The rules are straightforward: if the digit being discarded is less than 5 we round down, and if it is greater than 5 we round up. For example:
456.33 rounded to 4 significant digits becomes 456.3. Rounding 674.68 to 4 significant digits we get 674.7.
The tricky decision is what to do when the discarded digit is 5. “Round it up”, I hear you say. That is what Microsoft Excel would do; however, I say there is a better way.
An improved rule is to round so that the last retained digit is even. For example, 43.25 rounds to 43.2, whereas 43.35 rounds to 43.4[1].
The reason for this rule, as opposed to always rounding up, is to stop rounding errors from accumulating, particularly in long calculations. Since odd and even digits occur in a more or less random sequence, rounding up cancels out rounding down. It is therefore a better way of arriving at an answer.
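For what it’s worth, Python’s decimal module supports both tie-breaking rules, which makes the difference easy to demonstrate [working in exact decimals also sidesteps the floating-point wrinkle that 43.35 cannot be stored exactly in binary]:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

for text in ["43.25", "43.35"]:
    value = Decimal(text)
    half_up = value.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)
    half_even = value.quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN)
    print(f"{text}: always round up -> {half_up}, round to even -> {half_even}")
# 43.25: always round up -> 43.3, round to even -> 43.2
# 43.35: always round up -> 43.4, round to even -> 43.4
```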
We have now come full circle and are back where we started. My request is this: Please select a number of significant digits [and round off the insignificant ones] so that the answer isn’t more precise than the input. Choose accuracy over precision.
The primary advantages to this approach are:
- You are far less likely to make a mistake if you're not carrying all those digits.
- You’ll do the calculation quicker if you’re not carrying all those digits.
- Most importantly, you won’t look daft on a construction site when you ask the contractor to measure the floor thickness to two decimal places.
[1] I realise someone smarter than me is probably about to point out that there is a menu option in Excel that can fix my gripe. In which case my argument is softened, but not defeated. Why isn’t it the default?
p.s. Microsoft, please don’t get upset and crush my blog.