Approximation in Science and Engineering

One of the more interesting skills you learn as an engineer or a scientist is the art (and skill; it’s really both) of making reasonable order-of-magnitude estimates. Using a combination of knowledge, common sense, reasoning, intuition, and some quick hand calculations, a skilled engineer or scientist will often try to estimate the critical parameter or result of a system before going through the tedious calculations needed for a more accurate answer. This is useful for several reasons: it gives you an idea of what orders of magnitude you will be dealing with, it gives you a ‘sanity check’ for when you calculate a more accurate answer, and, most importantly, an order-of-magnitude estimate is often all that you really need.

This process is sometimes called handwaving, a back-of-the-envelope calculation, a guesstimate, or a ballpark estimate. Such problems are also called Fermi problems, since the physicist Enrico Fermi was renowned for performing these simple estimates and getting within a factor of 2 or 3 of the actual answer (which is extremely good for a simple estimate). Two famous examples are his estimate of how many piano tuners there are in Chicago, and his estimate of the energy yield of the first atomic bomb test, made by dropping some scraps of paper and seeing how far the blast blew them.
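
To make the flavor of a Fermi problem concrete, here is the piano-tuner estimate written out as a quick script. Every input below is my own rough guess rather than Fermi’s actual figures; the point is the chain of assumptions, not the particular numbers:

```python
# Fermi estimate: how many piano tuners are there in Chicago?
# Every input is a rough order-of-magnitude assumption.

population = 3e6                  # people in Chicago, give or take
people_per_household = 3          # rough average household size
households_with_piano = 1 / 20    # guess: ~5% of households own a piano
tunings_per_piano_per_year = 1    # a piano gets tuned about once a year
tunings_per_day = 4               # a tuner can service ~4 pianos a day
working_days_per_year = 250       # ~50 weeks * 5 days

pianos = population / people_per_household * households_with_piano
tunings_needed_per_year = pianos * tunings_per_piano_per_year
tunings_per_tuner_per_year = tunings_per_day * working_days_per_year

tuners = tunings_needed_per_year / tunings_per_tuner_per_year
print(f"~{tuners:.0f} piano tuners")  # prints ~50, i.e. a few dozen
```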

Often when you are making such estimates, you simply round each number (logarithmically) to its nearest power of 10. Your estimates are off by factors at least that large anyway, so there’s little point in carrying precise numbers through the calculation. A personal favorite anecdote on this principle came from my brother Porter when explaining why he replaced \(\pi \) with the number 1: “Why did I make \(\pi \) equal to 1? Because it’s not 10.” (In actuality, since the logarithmic midpoint between 1 and 10 is \(\sqrt{10}=3.16 \) and \(\pi=3.14 \), \(\pi \) sits almost exactly on the dividing line between rounding down to 1 and rounding up to 10. You can choose either, or just call it 3, which is what I usually do.)
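
In code, rounding logarithmically to the nearest power of 10 just means rounding \(\log_{10}x \) to the nearest integer, which also makes the \(\sqrt{10} \) dividing line explicit. A minimal sketch:

```python
from math import log10, pi

def nearest_power_of_ten(x):
    """Round a positive number (logarithmically) to its nearest power of 10."""
    return 10 ** round(log10(x))

print(nearest_power_of_ten(pi))    # 1, since log10(pi) ~= 0.497 rounds down
print(nearest_power_of_ten(3.17))  # 10, just past the sqrt(10) dividing line
```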

The reason I bring all this up is that today I stumbled upon the page for a class at MIT that deals entirely with this subject. The name of the course? “Lies and Damn Lies: The Art of Approximation in Science”. I skimmed through the first chapter, and it is well written and interesting. You could learn a lot about this skill just by reading that chapter and working through its problems yourself.

Another closely related but slightly more rigorous method of estimation is dimensional analysis. The Wikipedia article I linked to is a bit opaque for the uninitiated, but the two examples halfway through the article are fairly simple to follow and stand well on their own. In dimensional analysis, you use your knowledge of the underlying physics of a system to make reasonable assumptions about which parameters are pertinent to your analysis, and then combine them so that the units work out. A more mathematically formalized version of this is the Buckingham π theorem, an extremely useful and versatile tool for the initial analysis of a system.
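
To give the flavor of the method, take the standard textbook example of a pendulum. Suppose the period \(T \) can depend only on the pendulum’s length \(L \), its mass \(m \), and the gravitational acceleration \(g \). The only combination of those quantities with units of time is

\(\displaystyle{T\propto\sqrt{\frac{L}{g}}}\) ,

so the mass must drop out entirely, and doubling the length stretches the period by a factor of \(\sqrt{2} \), all without solving a single differential equation. (The full small-angle solution merely supplies the dimensionless prefactor: \(T=2\pi\sqrt{L/g} \).)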

My favorite example is at the end of the article, where it shows how Geoffrey I. Taylor used the Buckingham π theorem to estimate, again, the energy output of the first atomic explosion. A summary of his original paper and analysis can be found online. Essentially, he was able to determine the following relationship:

\(\displaystyle{R\approx\left(\frac{Et^2}{\rho}\right)^{1/5}}\) ,

where R is the radius of the expanding shockwave at some time t after detonation, \(\rho \) is the ambient air density, and E is the energy released by the detonation. (A quick check that the units work: \(Et^2/\rho \) has units of \(\mathrm{kg\,m^2\,s^{-2}\cdot s^2/(kg\,m^{-3})}=\mathrm{m^5} \), so the fifth root is indeed a length.) He used recently declassified films of the explosion to get radius and time values, which allowed him to estimate the energy. In fact, when he published his results it caused quite a commotion in the US Defense Department, because the energy output of the atomic bomb was still classified at the time and Taylor’s result was far too accurate for their liking!
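
It takes surprisingly little arithmetic to reproduce a ballpark figure from that relation. The sketch below plugs in rough values in the neighborhood of those read off the published film frames; the specific radius and time are illustrative stand-ins (Taylor fit the whole sequence of frames, and the scaling law carries a dimensionless constant of order 1 that I simply set to 1):

```python
# Ballpark the Trinity yield from the blast-wave scaling E ~ rho * R**5 / t**2.
# The frame values below are rough illustrative numbers, not Taylor's fit.

rho = 1.2    # kg/m^3, ambient air density
R = 140.0    # m, approximate fireball radius in one film frame
t = 0.025    # s, approximate time of that frame after detonation

E = rho * R**5 / t**2    # joules; dimensionless O(1) constant taken as 1
kilotons = E / 4.184e12  # 1 kiloton of TNT = 4.184e12 J

print(f"E ~ {E:.1e} J, or roughly {kilotons:.0f} kt of TNT")
# Prints ~1.0e14 J, a few tens of kilotons: the right ballpark for Trinity.
```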

2 Responses to Approximation in Science and Engineering

  1. Elwin says:

    This reminds me of an interesting colloquium I once attended called “How far can a Grey Goose fly?” It was about the same topic (order-of-magnitude estimation), and it actually went into some detail about how your answers get more accurate as you add more steps in between. The piano tuner example on the Wiki page, for instance, has 6 steps. In general, the more steps your process has, the more likely it is that your overestimates will cancel out your underestimates and lead to a roughly correct guesstimate. Interesting stuff.

    On a semi-unrelated note, all of your links on this post (except for the MIT one) seem to be bad links; they are all linking to http://http://, which causes my browser to complain and not go anywhere useful.

  2. admin says:

    Thanks for pointing out my bad links, Elwin. The error cancellation is an interesting perk of this kind of analysis: when the individual steps err independently, the overestimates and underestimates tend to partially cancel rather than compound. The first chapter of the MIT course I linked to works through a problem estimating how much money an armored car can carry, and the error analysis at the end shows that the result really is better than a simple one-shot guess.
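
    Roughly speaking, if you model each step as an independent multiplicative error, the logarithmic errors add like a random walk, so the total error grows like the square root of the number of steps rather than linearly. A toy simulation (my own sketch, with arbitrary numbers) makes the point:

    ```python
    import math
    import random

    def mean_abs_log_error(steps, spread=3.0, trials=100_000):
        """Average |log10 error| of a product of `steps` factors,
        each off by up to a factor of `spread` in either direction."""
        a = math.log10(spread)
        total = 0.0
        for _ in range(trials):
            total += abs(sum(random.uniform(-a, a) for _ in range(steps)))
        return total / trials

    for n in (1, 4, 9, 16):
        print(n, round(mean_abs_log_error(n), 2))
    # Quadrupling the number of steps only doubles the typical log error.
    ```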
