Science


Two posts ago I showed how many coin flips it would take in order to have a 98% confidence of getting 92 heads in a row (à la Rosencrantz and Guildenstern Are Dead). The answer turns out to be 3.874\times 10^{28} coin flips, which, if you tried to do them by yourself, would take 100 billion times longer than the current age of the universe. Since I mentioned that only Wowbagger, the Infinitely Prolonged could pull off such a stunt, I think it’s only fair that any other solution also be Adamsian, at least in its practicality (or lack thereof) if nothing else.

My friend Spencer proposed a Dyson sphere to power a huge number of coin-flipping robots. I think he’s on the right track, and I had similar thoughts, though my ideas are a bit larger in scale and less detailed. I won’t go into detail on what kind of Dyson sphere would be best, since even simplistic models are fraught with difficulties and instabilities (a nice page discussing Dyson spheres, with some simple analysis, is here). Instead, let’s just say that we can build some kind of large Dyson network to capture a significant portion of the Sun’s energy. We’ll be conservative and say that after light capture, conversion to useful energy, maintenance, etc., we can use 10% of the Sun’s radiant output to power an array of coin-flipping robots.

Spencer also mentioned the concern that once you have good enough robots, coin flipping is no longer random: exactly precise robots flipping exactly precise coins in exactly precise ways will give the exact same result every time. That may be the case, but we’ll assume that the robots and coins are made imprecise enough that there is enough random variance among all the robots to make the system truly random and fair (this is in all reality probably impossible, but we’re in Adams’s universe, so we’ll assume it can be done anyway).

Since the robots don’t have to do anything but flip coins and report the outcome, we’ll say each robot consumes about as much power as a toaster oven, or 1000 W. The Sun’s luminosity is 3.846\times 10^{26} W, so assuming we can use 10% of the Sun’s energy we have:
\displaystyle{(0.1)\left (3.846\times 10^{26} W \right)\left (\frac {1\;\text {robot}} {1000\;W} \right) = 3.846\times 10^{22}\;\text {robots}}
With each robot flipping once per second, this many robots gives us the same number of flips every second, so we get the required number of flips in:
\displaystyle{\frac{\displaystyle{3.874\times 10^{28}\;\text{flips}}}{\displaystyle{3.846\times 10^{22}}\textstyle{\frac{\text{flips}}{s}}}=1.007\times 10^6\;s\approx 11\; \text{days and}\;16\; \text{hours}}.
Now that is a considerable improvement.
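If you want to check the arithmetic yourself, here is a minimal sketch in Python using the same assumptions as above (10% capture efficiency, 1 kW per robot, one flip per robot per second):

```python
# Back-of-the-envelope check of the power-limited robot army (assumptions as above)
L_sun = 3.846e26          # solar luminosity, W
efficiency = 0.10         # fraction of the Sun's output we assume we can use
power_per_robot = 1000.0  # W per robot, roughly a toaster oven
flips_needed = 3.874e28   # flips for 98% confidence of 92 heads (from the earlier post)

robots = efficiency * L_sun / power_per_robot   # ~3.846e22 robots
seconds = flips_needed / robots                 # one flip per robot per second
print(f"{robots:.3e} robots, done in {seconds / 86400:.1f} days")
# -> 3.846e+22 robots, done in 11.7 days
```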

This potential solution does have some problems though, the most obvious being whether there is enough useful material in the entire solar system to build 3.846\times 10^{22} robots, plus the Dyson power grid to run the whole thing, plus a maintenance system to keep it all in good working order, etc. If we limit ourselves to just the easy-to-use material, say the asteroid belt, we have about \small{2\times 10^{21}} kg of mass to work with. Assuming a total of 10 kg for each robot (including its share of the Dyson network power generation, infrastructure, maintenance, etc.), that limits us to just 2\times 10^{20} robots. This number of coin-flipping robots would then take 6.14 years to get the required \approx 3.874\times 10^{28} coin flips, which still isn’t bad at all. It might take several millennia to build the coin-flipping robot Dyson network, but once it was up and running you’d have your 92 heads in a row in just a few short years!
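The same kind of sketch covers the mass-limited case (again just assuming 10 kg of total system mass per robot and the rough asteroid-belt mass above):

```python
# Mass-limited case: robots we can build from the asteroid belt alone
belt_mass = 2e21          # kg, rough mass of the asteroid belt
mass_per_robot = 10.0     # kg per robot, including its share of the Dyson network
flips_needed = 3.874e28
seconds_per_year = 3.1557e7

robots = belt_mass / mass_per_robot             # 2e20 robots
years = flips_needed / robots / seconds_per_year
print(f"{robots:.1e} robots, done in {years:.2f} years")
# -> 2.0e+20 robots, done in 6.14 years
```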

So let’s say we let our coin-flipping Dyson array keep running until the Sun becomes a red giant in about 5 billion years and destroys it. We would have

\displaystyle{\begin{array}{c}\left( 2\times 10^{20}\;\text{robots}\right) \left( \frac{\displaystyle { 1\;\text{flip}}}{\displaystyle { \text{robot}\cdot\text{s}}}\right) \left( \frac{\displaystyle { 3.1557\times 10^{7}\; s}}{\displaystyle {1\; \text{yr}}} \right) \left( 5\times 10^{9}\; \text{yr} \right) \\ \approx 3.2 \times 10^{37}\; \text{coin flips} \end{array}}

From this, can we calculate how many heads in a row we can expect to get during this time? Our initial equation is
\displaystyle{F = 1-\left( 1-2^{-n}\right)^{0.5\,f}}
where F is the confidence probability we desire (we’ve been using 0.98, or 98%), n is the number of heads in a row, and f is the number of coin flips. Rearranging this for n (using the same \text{ln}\left(1-x\right)\approx -x approximation as before) we have:
\displaystyle{n = \frac{1}{\text{ln}\,2}\: \text{ln} \left( \frac{-0.5\,f}{\text{ln}(1-F)} \right)}

For some various confidence probabilities we have these results:

 \begin{array}{ l l } \textbf{\textit{F}} & \textbf{\textit{n}} \\ 0.9999 & 120 \\ 0.999 & 120 \\ 0.99 & 121 \\ 0.9 & 122 \\ 0.75 & 123 \\ 0.5 & 124 \end{array}

So what does this mean? You have a 99.99% chance of getting at least 120 heads in a row, so that much is pretty much guaranteed. However, you only have a 50% chance of getting 124 heads in a row. What gives? We go from flipping coins for 6 years to 5 billion years, and the only improvement we get is an additional 28 or so heads in a row? That’s because each additional head in a row has half the probability of occurring, so the probability decreases exponentially with a linear increase in the number of heads required. Conversely, for an exponential increase in the number of coin flips, we see only a modest linear increase in the number of expected heads in a row.
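If you want to reproduce the table (or try other confidence levels), the whole 5-billion-year calculation fits in a few lines of Python; the robot count, flip rate, and lifetime are the same assumptions as above:

```python
import math

robots = 2e20                            # asteroid-belt-limited robot count
flips = robots * 1.0 * 3.1557e7 * 5e9    # one flip/robot/s for 5 billion years, ~3.2e37 flips

# n = (1/ln 2) * ln(-0.5*f / ln(1-F)), from rearranging F = 1 - (1 - 2^-n)^(0.5 f)
for F in (0.9999, 0.999, 0.99, 0.9, 0.75, 0.5):
    n = math.log(-0.5 * flips / math.log(1 - F)) / math.log(2)
    print(f"F = {F:<6}  n = {int(n)}")
# -> 120, 120, 121, 122, 123, 124
```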

A couple of posts ago I talked about how laughably improbable it would be to get 92 heads in a row on a fair coin. To sum up, the probability is:

\displaystyle\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\cdots\left[\text{92 times}\right]\cdots\left(\frac{1}{2}\right)=2^{-92}=2.019\times 10^{-28}

The probability of this happening is so abysmally low that you could flip coins your entire life and never expect to see this happen. Or could you? How many times would you need to flip a coin to see a reasonable chance of this happening?

A pointless question? Most certainly. But trying to answer pointless questions that can be solved by math is one of the trademarks of a geek. So a quick review of probability and statistics led me to the Geometric Distribution, which gives the probability that an event first occurs on a given trial.

\displaystyle{P=p\left( 1-p\right)^{k-1}}

Here p is the probability of the event occurring in one trial, i.e. 2.019\times 10^{-28}, k is the number of trials, and P is the probability of the event occurring for the first time on trial k. However, this equation doesn’t quite give us the probability distribution we need: it gives the probability that the first success (heads 92 times in a row) happens exactly on trial k (flipping a coin 92 times in a row for the k-th time). We’re not interested in the probability of succeeding on one particular trial, we’re interested in the probability of one or more successes anywhere in the k trials. For that, we need the related Cumulative Distribution Function. Basically it’s the sum of the probabilities that the first success occurs on trial 1, 2, 3, … up to trial k. It’s actually pretty easy to derive without performing sums for arbitrarily large values of k. Since we want one or more successes, the only outcome we don’t want is a failure on every trial. The probability of a single failure is simply 1-p, so the probability of k failures in a row is \left( 1-p\right)^{k}. Since we want every possible outcome except every trial being a failure, we just subtract this from 1 (the sum of the probabilities of all possible outcomes is of course equal to one). This function comes out to be:
\displaystyle{F=1- \left( 1-p \right)^k}

Choosing a reasonable number for F, we’ll select 0.98. Solving the above equation for k we get:
\displaystyle{k=\frac{\text{ln}\left( 1-F\right)}{\text{ln}\left( 1-p\right)}}

This equation is exact, but it has a big problem. No normal calculator or computer is going to be able to calculate the answer because of the denominator. The logarithm of one is zero, so the logarithm of a number very very close to one is a very very small number. But no normal calculator is going to be able to handle \text{ln}\left( 1-2^{-92}\right)\approx\text{ln}\left( 0.999999999999999999999999999798\right)
(Note I said no normal calculator. I used Mathematica for this and it works fine. But anything limited to double-precision arithmetic isn’t going to get you there.) Fortunately there is a convenient Taylor series expansion for \text{ln}\left( 1-x\right). It is

\displaystyle{\text{ln}\left( 1-x\right)=-\sum_{i=1}^{\infty}\frac{x^i}{i}}

The first term of the series gives more than sufficient precision in this case, so \text{ln}\left( 1-p\right)\approx -p and we have
\displaystyle{k=-\frac{\text{ln}\left( 1-F\right)}{p}}

This tells us how many trials we will need to have a 98% confidence of getting 92 heads in a row, but it doesn’t tell us how many coin flips we will need. A trial is defined as 92 coin flips: if all 92 are heads it is a success, otherwise it is a failure. However, we don’t need to do all 92 flips each time; as soon as we get our first tails, we already know the trial is a failure and we can start over. Since a fair coin comes up tails half of the time, on average we will make about two flips per trial. So if f is the total number of coin flips we need for a 98% confidence, then:
\displaystyle{f=-\frac{2\;\text{ln}\left( 1-0.98\right)}{2^{-92}}=3.874\times 10^{28}}
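As a quick illustration of both the double-precision problem and the final number, here is a short Python sketch. Note that math.log1p, which computes ln(1+x) accurately for tiny x, is one way around the precision issue, but the one-term Taylor approximation gives essentially the same answer:

```python
import math

p = 2.0 ** -92            # probability of 92 heads in a row, ~2.019e-28
F = 0.98                  # desired confidence

print(math.log(1 - p))    # 0.0 -- (1 - p) rounds to exactly 1.0 in double precision
print(math.log1p(-p))     # ~ -2.019e-28, an accurate value for ln(1 - p)

k = -math.log(1 - F) / p  # trials needed, using ln(1 - p) ~ -p
f = 2 * k                 # ~2 flips per trial on average
print(f"{f:.3e} flips")   # -> 3.874e+28
```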

This is a very very big number. If we flipped a coin once a second, how long would it take us to get this number of flips?
\displaystyle{3.874\times 10^{28}\,\text{flips}\left( \frac{1\;\text{s}}{1\;\text{flip}}\right)\left( \frac{1\;\text{yr}}{3.1557\times 10^{7}\;\text{s}}\right)=1.22\times 10^{21}\;\text{yr}}

And how long is this? Really, really long. Astronomers estimate the current age of the universe to be 1.373\times 10^{10} years. That puts our coin-flipping project at about 100 billion times longer than the current age of the universe. This is a feat that could only be pulled off by, say, Wowbagger, the Infinitely Prolonged. According to this fascinating article on the eventual heat death of the universe, by that point there will be no matter left in the universe but white and black dwarfs (and black holes, but I don’t know if they’re considered to be matter within our universe or not), and they will have been flung from their orbits due to gravitational radiation.

So is all lost? Is there no way to get 92 heads in a row? For all practical purposes, yes, there is no way. As for impractical purposes, though, in a subsequent post I’ll detail a way we could accomplish it long before the death of the universe, using a scheme that would really only make sense in a Douglas Adams universe.

I ran across this page that is an outline for a lecture given by a professor of statistics at Berkeley. The title of his talk is “The top ten things that math probability says about the real world”, but he just glosses over six of them and spends the majority of his lecture discussing the last four. Still, all of the points are valid and important; in fact, a lot of his lecture covers subjects that are pet peeves of mine. But the one that never ceases to amaze me is the title of my post: people are predictably irrational in actions involving uncertainty.

Take for example the opening scene in Rosencrantz and Guildenstern Are Dead, where they are flipping the coin. Rosencrantz (or is it Guildenstern?) gets heads something like 92 times in a row. Now assuming a fair coin, the probability of that is laughably small: 2^{-92}=2.019\times 10^{-28}. You’d have a millions-of-times better chance of winning the lottery than of achieving this feat. In fact, given a lottery that has a one in one billion chance of winning, you’d have a better chance of winning said lottery three times in a row than you would of getting 92 heads in a row on a fair coin.

Proof: \left(1\times 10^{-9}\right)^3=1\times 10^{-27}>2.019\times 10^{-28}.

But that’s not what’s important here. The issue in question is what people will predict the next coin flip to be. If they see the large number of successive identical coin flips, and you then ask them what the probability of the next flip also being heads is, they will usually give one of two answers: 1) It is most likely to be heads, because the coin is obviously ‘on a roll’ of heads. 2) It is most likely to be tails, because there have been so many heads in a row that there is a ‘negative balance’ of tails that needs to be made up. This is despite any and all assurances that the coin is perfectly fair. The real answer is, of course, a 0.5 probability of heads and a 0.5 probability of tails. This is always true, no matter what the previous results may have been. The thing that many people fail to realize is this:

In any simple game of pure chance, every turn/round/instance is completely independent of previous turns, and every single turn has the exact same probability every time. This is how casinos make the majority of their money.

So why are most people so predictably irrational in such situations? Obviously I’m not a psychologist (or anything similar, but see this slide from the end of the lecture), but I think it has to do with the fact that as humans, we almost never have to make judgments in situations where the outcome is truly random. Such situations have only arisen quite recently in human history with the advent of gambling. And even then, only a subset of gambling games are purely random (like craps or roulette, assuming they are truly fair), while many involve a combination of chance and skill (card games fall into this category) and some are flat-out not fair (slot machines).

In most everything we deal with in daily life, even events that seem random when we observe them are almost never truly random. For example, take my daily bus commute. Even though the bus has a regularly scheduled arrival time, from my perspective it appears random within a window of +/- 10 minutes. How long it takes to arrive at school or home also appears random, anywhere from 15 to 45 minutes depending on traffic. But in reality, both when the bus comes and how long the ride takes are not random at all. The problem is that the number of variables determining these two times is so vast and unpredictable that the end result might as well be random, even though it isn’t.

Back to my earlier point though: I think we as humans tend to find patterns and correlations in many things (even when they don’t exist) because finding correlations and patterns is extremely useful. Such thought processes have fueled our scientific progress and helped us navigate the dangerous minefield of social interaction. But it has its downsides. People losing lots of money gambling is the obvious one, but there are also things like finding pictures of Mary or Jesus in just about anything, or the existence of most every pseudoscience out there (numerology, cryptozoology, paranormal phenomena, etc.).

Since it was on major news and aggregator sites yesterday, I can pretty much assume that all of the internet has seen yesterday’s story on the super mouse. Basically, researchers have found a way to genetically alter mice so that they have an astounding 10 times the normal number of mitochondria in their muscles! (Quick review: mitochondria are the fundamental power plants of cells; they break down the products of glucose and fat metabolism to provide energy for all of the cell’s processes.) These mice eat twice as much, but have half the body weight because they have almost no fat reserves. Recalling the movie Blade Runner you might think, “but they die at a young age, right?” Actually, they live longer than normal mice, and were still producing litters at three years of age, the equivalent of over 80 years old in human terms.

So what are some possible ramifications? From the BBC article:

Other research groups have produced similar novel rodents by altering different aspects of their genetics. One criticism of the work is that it could open the door to abuse, with the spectre of athletes resorting to gene therapy to try to improve their performance.

But Professor Hanson played this down. “Right now, this is impossible to do – putting a gene into muscle. It’s unethical. And I don’t think you’d want to do this. These animals are rather aggressive, we’ve noticed.”

It makes the animals more aggressive? And he thinks that is going to keep unscrupulous athletes from gene-doping? What hole has he been living in? This illustrates one of the major misgivings I have with medical research in general, and genetic research in particular: too little thought for the consequences. The article briefly addresses the issue of athletes trying to gain a genetic advantage, but what about military applications? Who doesn’t think the Pentagon or some other nation’s military would pay top dollar for genetic ‘super soldiers’?

The truth of the matter is, no matter how much the public decries it as unethical, as soon as the technology becomes available there will be people trying to genetically engineer a ‘super soldier’ or a ‘super athlete.’ If nothing else, the sheer amount of money put into professional sports and military technology is an indicator that if the technology exists, there will be people willing to take the chance. You just have to genetically engineer a zygote, implant it into a surrogate mother from a third-world country who is willing to do such things for money, and have the baby carried to term at a facility in a ‘country of convenience’ (somewhere in Eastern Europe, Southeast Asia, etc.). You now have your genetic superbaby. And what about when gene therapy through retroviruses becomes viable? You can bet there will be athletes willing to give it a chance.

Similarly, as soon as the technology to target and trigger genes for increased intelligence, beauty, immune function, etc. is developed, there will be rich parents from all over the world willing to pay an exorbitant sum of money to ensure that their child is a perfect, beautiful genius, no matter what laws may be in place to stop such genetic tampering. Even if you pass laws in the U.S. and other first-world nations, once the technology exists there will be clinics in India, Bulgaria, Georgia, and other such places where the genetic treatment can be done.

I realize this technology has great potential to help cure all sorts of genetic and other health problems, but it really is a Pandora’s box. Professor Hanson’s statement seems incredibly naive to me. This is classic Pandora: he has too narrow an idea of where and how his technology may be used or abused, and he forgets that others, even those with similar technical ability, may not share his sense of morality.

Today on the internet I saw the calculation for i^i, where i is the square root of -1. It can be solved pretty easily using Euler’s formula, where any number z in the complex plane can be expressed as

\displaystyle{z=r\,e^{i\,\theta}}


where r is the absolute value of the number, e = 2.718281828\ldots is the base of the natural logarithm, i is of course \sqrt{-1}, and \theta is the angle in radians of the number measured counterclockwise from the positive real axis, essentially giving the complex number in polar coordinates. So for example 1\,e^{\textstyle{i\:0}}=1, 1\,e^{\textstyle{i\:\pi}}=-1, 1\,e^{\textstyle{i\frac{\pi}{2}}}=i, etc. We know that i^i is going to be some number in the complex plane, so we can express that number as \textstyle{r\,e^{i\,\theta}=e^{\text{ln}(r)}e^{i\,\theta}}= e^{\text{ln}(r)+i\,\theta}=e^z, where z=\text{ln}(r)+i\,\theta is itself a complex number.
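As a quick numerical sanity check of the polar form (and of those example values), Python’s cmath module will happily confirm them; this is just an illustration, not part of the derivation:

```python
import cmath

# Polar form z = r * e^(i*theta): cmath.polar returns (r, theta)
for z in (1, -1, 1j, 1 + 1j):
    r, theta = cmath.polar(z)
    print(f"{z} = {r:.4f} * e^(i * {theta:.4f})")

# The example values above: e^(i*0) = 1, e^(i*pi) = -1, e^(i*pi/2) = i
print(cmath.exp(1j * 0))               # (1+0j)
print(cmath.exp(1j * cmath.pi))        # ~ -1 (plus a tiny rounding error)
print(cmath.exp(1j * cmath.pi / 2))    # ~ 1j
```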
(more…)

As I mentioned in the earlier post, I said I would post the answer later. Most people who were kind enough to comment on it also gave the correct answer. I thought the trick was pretty sneaky, so I wrote an email to the author of the website:

Cliff,
It took me a few minutes, but I really had to slap myself on the forehead when I realized the trick. The trick being that *none* of the six cards appear after you click on the eyes. The five that appear after the click are all different. The original six include three red face cards: a king, a queen, and a jack, and three black face cards, also a king, a queen, and a jack. When you click on the eyes, five face cards are shown. However, each face card that was there before is replaced by the card of the same rank in the other suit of the same color. A king of diamonds is replaced by a king of hearts, a jack of spades is replaced by a jack of clubs, etc. Then you show five of these cards instead of six, and no matter which of the six they chose, that card will not appear among the five. Since the person playing is only paying attention to the card they picked, they only notice that their card is no longer there: they don’t notice that all the other cards have changed, because they weren’t paying attention to that detail. Quite an ingenious little trick.

I’m curious, what percentage of people figure out the trick to this? I’m sure you won’t post this reply (and I wouldn’t ask you to) on your site since it would ruin the trick for everyone, but I’m certainly not a genius and I highly doubt that I’m the first to figure it out.

Also, I find it hilarious that many people claim it didn’t work and their card wasn’t removed! I guess they remembered their own card wrong, or something like that.
–Derek

Alas, Cliff never wrote me back.

Reading this news article piqued my interest. What I found interesting is not that Japan’s foreign minister proclaimed that Japan was capable of producing a nuclear weapon should it wish to, but that such a declaration was even necessary.

Japan has been capable of this for several decades, I would assume. As an engineer, the way I see it is this. In order to produce a nuclear weapon, you need the following:

(more…)

I saw a link to this ESP test and on a whim checked it out. Try the test a few times to see how accurate it is. Then, see if you can figure it out. It had me stumped at first, but in a few minutes I figured out how it works. Once you think you’ve figured it out, post your answer here as a comment. If no one gets it (or if no one even replies, which is much more likely given the extremely limited readership of this blog) after a few days, then I’ll post the answer.

Hint: You can read through the postings on his site with people’s theories and explanations of how it works. However, all of the explanations given by the people there are wrong. In fact, it’s probably best not to read them until you have figured out the trick yourself.

I ran across this article that discusses the economic reality of why people in America don’t use public transportation. I find it meaningful because he addresses the value of the commuter’s time, which he argues is the most important variable and the one usually ignored by those who try to get people to adopt mass transit.

Basically, it boils down to this: if I can drive there faster than I can get there via mass transit, I will drive, because my time has real (i.e. economic) value to me. This is more pronounced the more money you make, because your time is correspondingly more valuable. Also, even if it takes the same amount of time, most people will choose to drive because time spent by yourself in your car (where you can control your immediate environment) is worth more than time spent on a bus or train, crowded in with people you’d rather not be around if you had an alternative.

In his analysis he only discusses the US, comparing places like New York, where it makes sense for most people to use mass transit, to Moab, UT, where obviously no one does. What I find most compelling about it, though, is that even though he never mentions it, his tenets seem to apply perfectly to Japan, where mass transit is almost ubiquitous and is used by practically everyone.

Let’s look at some of his conclusions and see how well they apply to Japan and to my commute to school here in Austin:
(more…)

I’ve decided to try to use my commuting time on the bus to and from campus every day more effectively. Up until now I have been reading books on my older Sony Clie, mostly sci-fi books that my brother emails to me in text format. There’s still a near-endless supply where that came from, but I’ve decided I could put that time to better use by studying Japanese. I’ve always intended to learn technical Japanese, mostly because even though I have years of experience speaking Japanese and speak it primarily at home, I’m embarrassed that I can’t give even the simplest explanation of my research, or hold any chemical/scientific/mathematical discussion, without being instantly flummoxed by my lack of pertinent vocabulary. Now, I do have some photocopied pages from a textbook on technical Japanese terminology, but I don’t have the title page, so I don’t know the actual name, author, or publisher of the book. So this is all I have to go on currently. Still, it’s plenty for me to learn before I move on to more.

So to make use of my Clie, I’ve found the nice little program KingKanji, which lets you make and use flash cards to study on your Palm. It works great and I love it. It comes with lots of vocabulary lists, but unfortunately they are all “Japanese 101-202” level stuff, so even reviewing them gets really boring. I was hoping that some enterprising Japanese student would have posted vocabulary lists that I could use, but I didn’t find anything. So I’m making the flashcard files myself from the photocopied book that I have. To get an idea of the vocabulary I’m working with, here is the list on page 13: 化学的諸量 (various units of measurement in chemistry):

モル [モル] /mole (M, amount of substance)/
比容 [ひよう] /specific volume/
アボガドロ数 [アボガドロすう] /Avogadro’s number/
分子 [ぶんし] /molecule/
粒子 [りゅうし] /particle (not subatomic)/
原子番号 [げんしばんごう] /atomic number, Z/
…のモル分率 […のモルぶんりつ] /mole fraction of …/
…の質量分率 […のしつりょうぶんりつ] /mass fraction of …/
…の容積分率 […のようせきぶんりつ] /volume fraction of …/
…の重量モル濃度 […のじゅうりょうモルのうど] /molality of …/
濃度 [のうど] /concentration/
平均自由行程 [へいきんじゆうこうてい] /mean free path/
行程 [こうてい] /distance, path length/
熱力学的温度 [ねつりきがくてきおんど] /thermodynamic temperature/
絶対温度 [ぜったいおんど] /absolute temperature/
セルシウス温度 [セルシウスおんど] /Celsius temperature scale/
気体定数 [きたいていすう] /gas constant, R/
ボルツマン定数 [ボルツマンていすう] /Boltzmann constant/
熱 [ねつ] /heat, Q/
仕事 [しごと] /work, W/
内部エネルギー [ないぶエネルギー] /internal energy/
エンタルピー [エンタルピー] /enthalpy/
エントロピー [エントロピー] /entropy/
熱容量 [ねつようりょう] /heat capacity/
比熱容量 [ひねつようりょう] /specific heat capacity/
熱伝導率 [ねつでんどうりつ] /thermal conductivity/
浸透圧 [しんとうあつ] /osmotic pressure/
…の化学ポテンシャル […のかがくポテンシャル] /chemical potential of …/
溶解 [ようかい] /dissolve, solution/
溶解性 [ようかいせい] /solubility/
溶解度積 [ようかいどせき] /solubility product, K = [Ag+][Cl-]/
化学平衡 [かがくへいこう] /chemical equilibrium/
溶液 [ようえき] /solution (usually aqueous)/
溶質 [ようしつ] /solute/
単位 [たんい] /unit of measurement/
平衡定数 [へいこうていすう] /equilibrium constant/
難溶性 [なんようせい] /insolubility/
適用 [てきよう] /apply (a concept or principle)/

You’ll probably notice that some of these terms towards the end are not units of measurement. They are other pertinent words that I put in on my own. I think it’s pretty interesting how some words like specific heat capacity have full kanji words, but others like entropy are just transliterated. I wonder what characters they use for entropy in, say, Chinese? Probably something like ‘chaos’.

This is pasted directly from the flashcard file, so you can see that the encoding is pretty simple: first the word in kanji, then its reading in kana in brackets, and then its meaning in English between slashes. You can set the program to show whichever of the three you want, so you can quiz on English, reading, or kanji. It also has different modes for learning, quizzing, and reviewing the lessons. Quite a good program, in my opinion.
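Since the format is so regular, it’s also trivial to parse programmatically. Here’s a minimal sketch in Python; the regex is just my own illustration of the kanji/[reading]/meaning layout, not anything KingKanji itself requires:

```python
import re

# Each flashcard line looks like: KANJI [READING] /MEANING/
LINE = re.compile(r"^(?P<kanji>\S+)\s+\[(?P<reading>[^\]]+)\]\s+/(?P<meaning>[^/]+)/")

sample = "平均自由行程 [へいきんじゆうこうてい] /mean free path/"
m = LINE.match(sample)
if m:
    print(m.group("kanji"), "|", m.group("reading"), "|", m.group("meaning"))
# -> 平均自由行程 | へいきんじゆうこうてい | mean free path
```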

When I get a little farther, I’ll start posting these flashcard files on a separate page. I don’t know if anyone will use them, but someone might (if they’re even able to find them…).
