“How can we scientifically analyze a number for its accuracy?”
Believe it or not, scientific numbers are very different from mathematical numbers. This may sound absurd at first, but read on and it will start to make sense. In mathematics, there is no real difference between the number 1, 1.0, or even 1.00000 for that matter. But when working in science, these numbers are anything but interchangeable! Why is that so?
Well, it’s all because scientists and engineers have to deal with something called accuracy. When working with empirically derived numbers, such as a mass read off a scale, it’s impossible to know the true value of a measurement. Each number one works with therefore carries a certain level of accuracy. To quantify this accuracy, we use a tool called significant figures, which follow a certain set of rules.
Each digit that we care about is termed a significant figure, or sig fig for short. All non-zero digits are significant (as they represent a quantity), all zeroes between two significant figures are significant (as the number cannot be simplified without losing them), and zeroes trailing after both a sig fig and a decimal point are significant (as they express the level of accuracy of our measurement).
Let’s do a few examples. 400 has only 1 significant figure (the four is a non-zero digit, and none of the zeroes are “sandwiched” between other significant figures or behind a decimal point). 404 has 3 significant figures (the zero is sandwiched between two non-zero digits). 4 has only 1 significant figure (the only digit is a four, which is non-zero). 4.00 actually has 3 significant figures (both zeroes trail behind the decimal point). .040 has only 2 significant figures (the first zero is a leading zero with no non-zero digit in front of it, so it is not significant; the trailing zero sits behind both a decimal point and a non-zero digit, so it is).
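The rules above are mechanical enough to sketch in code. Here is a minimal Python version, assuming the convention used in this article that trailing zeroes with no decimal point (as in 400) are not significant; the name count_sig_figs is just a placeholder for illustration.

```python
def count_sig_figs(number: str) -> int:
    """Count significant figures in a decimal string like '400' or '.040'."""
    digits = number.lstrip('+-')          # sign does not affect sig figs
    has_decimal = '.' in digits
    digits_only = digits.replace('.', '')
    # Leading zeroes are never significant.
    stripped = digits_only.lstrip('0')
    if not stripped:
        return 0
    if has_decimal:
        # With a decimal point, trailing zeroes ARE significant.
        return len(stripped)
    # Without a decimal point, trailing zeroes are treated as placeholders.
    return len(stripped.rstrip('0'))

# The examples from above:
for value in ("400", "404", "4", "4.00", ".040"):
    print(value, count_sig_figs(value))
```

Running this reproduces the counts worked out by hand: 400 gives 1, 404 gives 3, 4 gives 1, 4.00 gives 3, and .040 gives 2.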