I have two daughters, aged 8 and 10, and they love Star Wars. They both have Star Wars lightsabers, and in the past they’ve had Star Wars t-shirts and pajamas. They love the new Star Wars: Rebels TV show, as it’s made for an audience of their age, and it was great to see how excited and invested they were when Kanan faced the Inquisitor in the season finale. We haven’t had a chance to see a lot of the Clone Wars TV show, but we’ve enjoyed watching some of those episodes together as well.

Even with that, they had never seen any of the actual movies. I had been avoiding the movies for a long time, because I have mixed feelings about them. I loved the original trilogy when it was in its original form, but like many fans I don’t think very highly of all the changes George Lucas made to it (Han not shooting first, Jabba at the cantina, Hayden Christensen replacing Sebastian Shaw as Anakin, no more ‘yub yub’ song at the end of Return of the Jedi, etc.). In fact I hadn’t seen the original trilogy myself for over a decade.

And as for the prequels, the kindest word I have for them is ‘sub-par’. Darth Maul was pretty cool, and… and that’s about it. I didn’t even see episodes 2 or 3 in the theater, instead seeing them later when they came out on VHS or DVD.

But recently I learned about the Despecialized Editions: basically fans going through the painstaking effort of reproducing the original trilogy, shot for shot, as it was in the original theatrical release. Getting hold of them is more than a little laborious, to say the least, but after doing so we watched all three of them during spring break a couple of weeks ago.

It was great.

My 8-year-old daughter had some trouble following some of the plot and dialogue, so I had to pause it periodically to answer her questions, but they both really enjoyed watching them. Some specific comments I recall are:

    “C-3PO doesn’t do anything but whine. He’s kind of like comic relief for the movie.”
    “Why is that old guy [Grand Moff Tarkin] ordering Darth Vader around? Isn’t Vader the main bad guy?”
    “It’s really dangerous to be an officer under Darth Vader. You get promoted quickly, but if you mess up you’re dead.”
    “Han Solo’s a really cool guy.” (this was my 10-year-old daughter, who’s just starting to notice boys a little bit)
    “Why did Jabba make Leia wear that outfit? It doesn’t make any sense.”

I was a little disappointed at the climax of Empire Strikes Back, because they already knew that Darth Vader was Luke’s father. That’s pretty much common knowledge now, so I wasn’t surprised at all that they knew.

However, the two of them were really invested in the love triangle between Han, Leia, and Luke in the first two films, *and they had no idea about the big reveal in Return of the Jedi*. After we finished the first two movies, that was all they could talk about:

    “Leia has kissed both Luke and Han, I wonder which one will she choose?”
    “I think she likes Han Solo more because she told him ‘I love you’, but now he’s frozen and Luke is with her.”

I sagely replied “we’ll just have to watch the next movie and see”.

So I will never forget the look of disbelief and shock on their faces when Luke is talking with the Force spirit of Obi-Wan in Return of the Jedi and it’s revealed that Leia is his sister. It was great, a true golden moment:


“That’s impossible!”

“But… they kissed!”

After they recovered from that, they had great fun in seeing Han’s consternation in dealing with Leia. Especially at the end they loved the look on his face when Leia tells him that Luke is her brother. Priceless.

So, now my daughters want to watch the rest of the Clone Wars TV show, and watch the prequels to be ready for Episode VII in December. I have the Clone Wars TV show, but not the prequel movies, so I’ll need to get hold of those next.

Fick’s Law is a principle of physics that forms the foundation of diffusion and, more generally, mass transfer. Diffusion is the process where molecules of dye in water spread out from their source instead of staying concentrated where the source is, even if you don’t mix the water at all. In layman’s terms I would define it like this: “molecules diffuse from areas of high concentration to areas of low concentration. They diffuse faster when the difference in concentration between the areas is larger, and they diffuse faster when the distance between the areas is smaller.”

A more accurate definition would be stated like this: “the diffusive flux is proportional to the difference in concentration, and inversely proportional to the distance.” First of all let’s look at the word ‘flux’, a very technical term. It’s basically a way of quantifying how much ‘stuff’ is moving at any particular point in space. It’s defined by the following:

 \large{ \left[ \text{flux of stuff} \right] = \frac{\left[ \text{stuff} \right]}{\left[ \text{area} \right] \left[ \text{time} \right] } }

So basically flux means the rate of flow of stuff per unit area. The ‘stuff’ stands for whatever moving quantity you are interested in. For example, in SI units molar flux would be in units of mol/(m²·s), heat flux (or more generally energy flux) would be J/(m²·s), which is equivalent to W/m², and mass flux would be kg/(m²·s). A bit more esoterically, in fluid dynamics there is momentum flux, information theory has entropy flux, electromagnetism uses electric flux and magnetic flux, and quantum mechanics even has probability flux!

Anyway, in the definition for Fick’s Law the word ‘proportional’ means a linear relationship: if the difference in concentration doubles, then the flux doubles. If the difference in concentration reduces to one half, then the flux reduces to one half as well. ‘Inversely proportional’ means the opposite: if the distance doubles, then the flux halves. If the distance halves, then the flux doubles.

For example, look at the following simple example:
[Figure: a simple Fick's Law example]
On the left we have some medium that contains substance A at concentration C_1 , separated by a membrane of thickness L from another medium that contains substance A at some lower concentration C_2 . Substance A then diffuses from the left through the membrane to the right.

From the definition of Fick’s Law we know that it’s the difference between the concentrations on the left and the right that’s important. We’ll define this as \Delta C = C_2 - C_1.

So now if we define the flux of A through the membrane as N_A , then we can say the following:

1. N_A is proportional to \Delta C, which is equivalent to
 N_A = \left[ \text{constant} \right] \times \Delta C

2. N_A is inversely proportional to L , which is equivalent to
 N_A = \left[ \text{constant} \right] \times \frac{ 1 }{ L }

3. We can then combine the two statements to say:
 N_A = \left[ \text{constant} \right] \times \frac{ \Delta C }{ L }
Basically we’re just combining the two constants into a single conglomerate constant. We can do this because while we have stated it’s a constant, we haven’t stated what value it must be, so we can combine multiple constants and call it a single constant.

So what do we do for this constant? The above equation tells us that so long as the material of the membrane and the substance A diffusing through it both remain the same, \Delta C and L can change to practically any value and the value of the constant will not change. So it becomes a property of the pair of substances: the one diffusing and the one being diffused through.

We call this constant the diffusivity, and use the notation D_{AB}, which denotes ‘diffusivity of substance A diffusing through substance B’.

So now we can write Fick’s Law as a general equation:

 \huge{ N_{A} = - D_{AB} \frac{\Delta C}{ L } }

First of all, what’s with the negative sign? This basically tells us that species A is flowing from areas of high concentration to areas of low concentration, assuming that D_{AB} > 0 (which it is by definition). Otherwise we would have things spontaneously flowing from low concentration to high concentration, which violates the 2nd law of thermodynamics.

What are the units on D_{AB}? We know the units of everything else in the equation, so it’s fairly simple to work out. The units of the molar flux N_A are mol/(m²·s), the concentration difference \Delta C is in mol/m³, and the distance L is of course simply in m. So the units for D_{AB} work out to be m²/s.

Some typical values of D_{AB} are shown below:

\small{\begin{array}{ccc} \text{solute} \: A & \text{medium} \: B & D_{AB} \: [\text{m}^2 / \text{s}] \\ \hline \text{water vapor} & \text{air} & 2.6 \times 10^{-5} \\ \text{napthalene} & \text{air} & 6.1 \times 10^{-6} \\ \text{sodium chloride} & \text{water} & 1.2 \times 10^{-9} \\ \text{ethanol} & \text{water} & 7 \times 10^{-10} \\ \text{helium} & \text{pyrex} & 4.5 \times 10^{-15} \\ \text{cadmium} & \text{copper} & 2.7 \times 10^{-19} \\ \text{aluminum} & \text{copper} & 1.3 \times 10^{-34} \end{array}}

There are some simplifications I’ve made. For example, diffusivity in gases varies inversely with pressure, so I showed the diffusivity in air at atmospheric pressure. Diffusivity in liquids can also vary with the overall concentration itself, so I reported typical average values. Also, diffusivity in solids can vary greatly with temperature, so I reported values for 298 K, or room temperature. But still you can usually make the following generalizations: diffusivity values in gases (at atmospheric pressure) are around 1 \times 10^{-5} [\text{m} ^2 / \text{s}] , diffusivity values in liquids are around 1 \times 10^{-9} [\text{m} ^2 / \text{s}] , and diffusivity values in solids vary widely, but are always much smaller than those in liquids and gases.
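To make this concrete, here’s a quick numerical sketch of Fick’s Law using the table value for sodium chloride in water. The concentrations and layer thickness are made-up illustrative numbers, not measurements:

```python
# A sanity-check calculation of Fick's Law, N_A = -D_AB * delta_C / L.
# Concentrations and thickness are illustrative assumptions; D_AB comes
# from the table above (sodium chloride diffusing through water).

D_AB = 1.2e-9    # diffusivity of NaCl in water [m^2/s]
C1 = 100.0       # concentration on the left side [mol/m^3]
C2 = 0.0         # concentration on the right side [mol/m^3]
L = 1e-3         # thickness of the water layer [m]

delta_C = C2 - C1            # [mol/m^3], negative since C1 > C2
N_A = -D_AB * delta_C / L    # molar flux [mol/(m^2*s)], positive = left to right

print(f"{N_A:.2e}")  # 1.20e-04
```

Even with a fairly steep concentration difference, the flux through a liquid layer is tiny, which matches the generalization above that diffusion in liquids is slow.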

The only remaining thing to do is to reformulate the equation in differential form. This means we take the limit as the thickness L goes to zero, and the right-hand side of the equation becomes a derivative:

 \huge{ N_{A} = - D_{AB} \frac{d C_A}{ d x } }

where now x is the coordinate in the same direction as L . This is the form for one direction in a Cartesian coordinate system. The more general form that is independent of coordinate systems is written as

 \huge{ N_{A} = - D_{AB} \nabla C_A }

where \nabla is the gradient operator. This is the form we’ll start with when we actually solve the problem of the bubble collapse, because the bubble is spherical so we’ll use spherical coordinates.

The other day I was playing with some soap bubbles and I started thinking: what makes them pop? Furthermore, sometimes they don’t pop; instead they shrink down to nothing. So what causes each of these different behaviors? Can the failure mechanism be predicted, and can you predict when it will happen?

Basically the collapse of the bubble can be attributed to one of two mechanisms. 1.) The air inside of the bubble diffuses through the bubble film, causing the bubble to shrink and eventually collapse. 2.) The liquid in the bubble film evaporates, causing the film to get thinner and thinner until it finally ruptures, causing the bubble to pop.

First I’d like to analyze the bubble shrinking and collapsing, but I’ll split it into a series of posts. The first few will be brief explanations of specific pieces of physics that are needed in order to perform the analysis. The order I think I’ll do them in is the following:

1. Fick’s Law
2. Quasi-steady-state Approximation
3. Henry’s Law
4. Ideal Gas Law
5. Laplace Pressure
6. Geometry Simplification

Then I’ll show how to put all those pieces together to derive a single equation that shows how long the bubble will last until it collapses.

Or I might change my mind on the structure and the order part-way through. We’ll see.

I went to see the Tom Cruise movie today, Groundhog Day/Independence Day. I’m not a movie critic so I can’t give a nuanced critique of the movie, but I enjoyed it.

Every time I see a Tom Cruise movie, I want to not like it simply because of how disturbing I find the whole Scientology thing. But darn it, he’s a good actor, and he does a great job in his movies. I never had any problem with the willful suspension of disbelief necessary to enjoy movies like this. Maybe I should check out Oblivion too.

I actually own the Japanese novel that the movie is based on, interestingly titled “All You Need is Kill”. I bought it a few years ago, before the movie had even been announced; it was recommended to me at the time. However, I never got through it. My Japanese reading/writing is a lot weaker than my speaking/listening skills, and it’s really slow for me to read. And if I can’t read fast enough to immerse myself and enjoy the story, it gets really hard for me to continue. Maybe I should give it another try.

Still with much too much free time on my hands, I’ve started re-watching Highlander: The Series, which is conveniently on YouTube in its entirety. Some thoughts so far on the first season (which originally aired back in 1992, over 20 years ago!):

The series seems to hold up pretty well overall. The acting is decent, and the casting is good as well. Englishman Adrian Paul is a million times more believable as a Scotsman than Christopher Lambert ever was. Alexandra Vandernoot, playing Duncan’s love interest, is as exotic and sexy as I remember her; however, some parts of 90’s fashion haven’t aged well, which is most noticeable in her costumes: her high-waisted jeans look like mom jeans today, and she (and all the other women in the show) have big poofy hair that looks pretty silly by current norms.

Production values are good, and it turns out that the show had quite a big budget for a TV show: it was financed and broadcast internationally, allowing it to make a profit from numerous markets across North America, Europe, and Asia simultaneously. With so many financial backers, I don’t know how the producers and directors managed to avoid having everything about the show ruined by a committee, but they seem to have managed it somehow.

Getting Christopher Lambert to do a cameo in the opening episode was excellent: their characters Connor and Duncan MacLeod really have great chemistry together; it’s a shame that Lambert didn’t agree to do any more episodes.

Of course my memory tells me that the show really went downhill in the last two seasons, but I’m still on the first season and I’m really enjoying it.

Now is the time of year when my wife and kids go back to Japan for a major portion of the summer, so I find myself with lots of (i.e. too much) free time. While I always have goals of doing something productive, it usually ends up with me playing lots of video games and watching lots of shows or movies.

So I just finished watching a web series called Street Fighter: Assassin’s Fist. Basically it’s 12 ten-minute episodes, so in total it’s about the length of a full-length film. And I really enjoyed watching it. I’m not extremely knowledgeable about the back story of the Street Fighter characters, just that Ryu and Ken both use a style called 暗殺拳 (ansatsu ken, or assassin’s fist), and that they were both best friends and rivals. I also knew that there was a third character who used the same style, named Akuma (Japanese for demon), and that he was an antagonist of some sort. That was about it.

Anyway, I was really impressed when I saw it. It wasn’t silly and campy like the old Street Fighter movie with Raul Julia. The dialog is mostly Japanese with subtitles, and in fact all of the Japanese characters are played by actual Japanese actors, with the sole exception of Ryu, played by Asian-American Mike Moh. Even then, both he and Ken regularly speak Japanese in the show. Overall I’d say that it’s probably the best film based on a video game that I’ve ever seen. If you played Street Fighter back in your youth, I think you’ll like this.

In my last post, I explained the game theory mathematics of the standard game of Rock Paper Scissors (RPS), and what the Nash equilibrium is and how it works. Now I will show a similar analysis, but based on a variant of RPS where winning with rock gets you 1 point, winning with scissors gets you 2 points, and winning with paper gets you 3 points.

We can solve for the Nash equilibrium by doing a probability tree of all the different possible combinations, as before with the standard RPS. In instances where you win, add that many points; when you lose, subtract that many points (your opponent gaining points is equivalent to you losing points). As before, we will choose R, P, and S according to some random distribution, where R is our probability of choosing rock, P is our probability of choosing paper, and S is our probability of choosing scissors. Similarly we define the variables R', P', and S' to be the probabilities chosen by our opponent. Doing so, we get the following expected point gain each time the game is played:

g = 0RR' - 3RP' + RS' + 3PR' + 0PP' - 2PS' - 1SR' + 2SP' + 0SS'

Remove the zero terms and then group them by R', P', and S':

(-3R + 2S)P' + (R-2P)S' + (3P-S)R'

For the equilibrium condition, we want the total to be 0, and we want it to be independent of whatever our opponent chooses for R', P', or S'. We do that by saying that each of the terms in the parenthesis must be equal to 0. This gives us 3 equations:

\begin{matrix}   -3R + 2S = 0 \\    R - 2P = 0 \\    3P - S = 0   \end{matrix}

Solve the 3 equations for the 3 unknowns, and you get R = \frac{1}{3}, S = \frac{1}{2}, and P = \frac{1}{6}. R+P+S = 1 as it should. You can substitute these into the original equation and prove to yourself that no matter what your opponent chooses for R', P', and S', the expected return is always 0.
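If you’d rather not trust the hand algebra, the same system can be checked with exact arithmetic. This is just a verification sketch; the substitution steps in the comments mirror the hand solution:

```python
from fractions import Fraction

# Solve the three equilibrium conditions by substitution, exactly:
#   R - 2P = 0    ->  P = R/2
#   -3R + 2S = 0  ->  S = 3R/2
# together with the normalization R + P + S = 1:
#   R + R/2 + 3R/2 = 3R = 1  ->  R = 1/3
R = Fraction(1, 3)
P = R / 2
S = 3 * R / 2

# Verify that all the conditions hold.
assert R + P + S == 1
assert -3*R + 2*S == 0
assert R - 2*P == 0
assert 3*P - S == 0
print(R, P, S)  # 1/3 1/6 1/2
```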

So… if you’re one of the 3 people on the entire internet that is still reading this thing, you may be asking yourself, “what about when my opponent doesn’t choose the Nash equilibrium? Is there an optimum R, P, S that will maximize my expected score?”

It turns out there is, and the answer is quite simple, though getting there takes a bit of algebra. Take the formula for the expected point gain as before (terms equal to 0 eliminated):

-3RP' + RS' + 3PR' - 2PS' - 1SR' + 2SP'

Use R+P+S=1 and R'+P'+S'=1 to eliminate S and S' so it’s only in terms of R, P, R', and P', and define this as a function g(R,P):

g(R,P) = -6RP' + 2P' + 6PR' - 2P + R - R'

Now we have a surface function of two variables, R and P, with R' and P' being constants. We can imagine this in a 2D domain with R as the x-axis and P as the y-axis. The domain in question is then a triangle with vertices at (1,0) (corresponding to R=1 and P=S=0), (0,1) (corresponding to P=1 and R=S=0), and (0,0) (corresponding to S=1 and R=P=0).

[Figure: the triangular domain in the (R, P) plane]

We could then take the calculus approach and find the local maxima of g(R,P), but we really don’t have to do that. Looking at g(R,P), it is linear w.r.t. both R and P, and so g(R,P) is a flat plane, tilted in some direction determined by R' and P'. Also we know from calculating the Nash equilibrium that g\left(\frac{1}{3},\frac{1}{6}\right)=0 (i.e. when we choose the Nash equilibrium that expected point gain g(R,P)=0 ), and that when R'=\frac{1}{3} and P'=\frac{1}{6} that the plane will be coplanar with the X-Y plane [i.e. when our opponent chooses the Nash equilibrium then g(R,P) will always be zero no matter what we choose, that is only possible when the flat plane is zero everywhere].

Any deviation from those values by R' and P' will cause the plane to tilt in some direction. Since it is now tilting, there will be a direction that increases the value of g(R,P), and since g(R,P) has a constant slope everywhere, the maximum within the triangle-shaped domain must be on one of the points (1,0), (0,1), or (0,0) [or perhaps two of the points will have the same maximum value].

So the optimal choice will either be R=1 (with P=S=0), P=1 (with R=S=0), or S=1 (with R=P=0). Simply evaluating the first equation above for these three choices and seeing which one maximizes the expected point gain will give you your choice.
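As a sketch, here is how that corner evaluation could look in Python. The function names are mine; the payoff terms come straight from the expected-gain formula above:

```python
# Expected point gain for us playing mix (R, P, S) against an opponent
# playing (Rp, Pp, Sp), in the 1/2/3-point variant of RPS.
def expected_gain(R, P, S, Rp, Pp, Sp):
    return -3*R*Pp + R*Sp + 3*P*Rp - 2*P*Sp - S*Rp + 2*S*Pp

# Evaluate the three pure strategies (the triangle's corners) and keep
# the best one. Ties go to the first listed corner.
def best_pure_strategy(Rp, Pp, Sp):
    corners = {"rock": (1, 0, 0), "paper": (0, 1, 0), "scissors": (0, 0, 1)}
    return max(corners, key=lambda name: expected_gain(*corners[name], Rp, Pp, Sp))

# Against an opponent playing each move 1/3 of the time, paper and
# scissors tie at an expected gain of 1/3 per game, while rock loses 2/3.
print(best_pure_strategy(1/3, 1/3, 1/3))
```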

So for example, here is the surface for g(R,P) we get when our opponent chooses rock, paper, and scissors with equal frequency, or (R',P',S')=\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right). In our equation S' was eliminated, so for calculating g(R,P) we just need to say that (R',P')=\left(\frac{1}{3},\frac{1}{3}\right):

[Figure: surface plot of g(R,P) when the opponent plays each move with probability 1/3]

In this plot the gray triangle is the original domain in the previous image, the red triangle is the actual surface of g(R,P), and the black arrow is a normal vector from the plane coming from the Nash equilibrium point at g\left(\frac{1}{3},\frac{1}{6}\right)=0.

It appears that the red triangle is tilted along the R=\frac{1}{3} line, so that both (R,P)=(0,0) and (R,P)=(0,1), or in other words playing scissors every time or paper every time (or some mix of the two) will give you the maximum expected point gain of \frac{1}{3} every time that you play.

Now what if our opponent chooses a strategy that is close to but not quite the Nash equilibrium? How much of a point gain can we expect by using the optimal strategy?

Here is the surface for g(R,P) if our opponent chooses (R',P')=\left(\frac{1}{3}+0.01,\frac{1}{6}-0.01\right):

[Figure: surface plot of g(R,P) for the near-equilibrium opponent]

The slope of the response surface is much smaller, reflecting the fact that our opponent’s choice of R, P, and S is much closer to the Nash equilibrium. In this case, it turns out that choosing either rock or paper every time will give the optimal point gain, an expected gain of only 0.03 each time we play.

Of course choosing rock or paper every time would be something that our opponent, if not a mindless robot, would be able to easily capitalize on. What if instead we only slightly perturb our choice of (R,P) away from the Nash equilibrium like our opponent did, but instead perturb it in the direction that gives us the maximum expected point gain?

The distance we move from (R,P)=\left(\frac{1}{3},\frac{1}{6}\right) is the same as our opponent’s. He moved 0.01 in the R-direction and -0.01 in the P-direction, for a total distance of \frac{\sqrt{2}}{100}. We know that playing rock or paper every time will maximize our expected score; this corresponds to the line P+R=1. The shortest path toward this line is to move perpendicular to it, which is the vector direction (1,1). So this means we should choose the point (R,P)=\left(\frac{1}{3}+0.01,\frac{1}{6}+0.01\right). If we do so, our expected point gain each time we play the game is g(R,P)=0.0012.
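That 0.0012 figure is easy to check numerically with the reduced two-variable form of the expected gain. This is just a verification sketch of the numbers above:

```python
# g(R, P) with S and S' eliminated via R + P + S = 1 and R' + P' + S' = 1.
def g(R, P, Rp, Pp):
    return -6*R*Pp + 2*Pp + 6*P*Rp - 2*P + R - Rp

Rp, Pp = 1/3 + 0.01, 1/6 - 0.01   # opponent: slightly off the equilibrium
R, P = 1/3 + 0.01, 1/6 + 0.01     # us: nudged toward the P + R = 1 edge

print(round(g(R, P, Rp, Pp), 6))  # 0.0012
```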

My next question is then, how many games do we have to play in order to have, say, a 95% confidence that our score will be greater than our opponent’s? Remember that what we calculated above is the expected gain, which is like averaging over a series of many, many games. The outcome of any one game is random, but after many games we expect to pull ahead slightly. However, I’m really rusty on this kind of math and I don’t remember how to do it. I’ll have to review a bit.
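Short of working out the math, one crude way to attack the confidence question is by simulation. Here is a Monte Carlo sketch; all the names and parameters are mine, purely illustrative:

```python
import random

# Net point change for us per game in the 1/2/3-point variant:
# (our move, their move) -> points. Ties and unlisted pairs are 0.
PAYOFF = {
    ("R", "S"): 1, ("S", "R"): -1,   # rock beats scissors, worth 1
    ("S", "P"): 2, ("P", "S"): -2,   # scissors beats paper, worth 2
    ("P", "R"): 3, ("R", "P"): -3,   # paper beats rock, worth 3
}

def sample(mix):
    # mix = (pR, pP, pS); returns "R", "P", or "S"
    return random.choices("RPS", weights=mix)[0]

def lead_probability(my_mix, opp_mix, n_games, n_trials=2000):
    """Estimate the probability that our total is ahead after n_games."""
    ahead = 0
    for _ in range(n_trials):
        total = sum(PAYOFF.get((sample(my_mix), sample(opp_mix)), 0)
                    for _ in range(n_games))
        if total > 0:
            ahead += 1
    return ahead / n_trials

# e.g. our perturbed mix vs. the opponent's perturbed mix from above:
p = lead_probability((1/3 + 0.01, 1/6 + 0.01, 1/2 - 0.02),
                     (1/3 + 0.01, 1/6 - 0.01, 1/2), n_games=1000)
```

Increasing n_games until lead_probability crosses 0.95 would give a brute-force answer, at the cost of simulating a lot of games.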

Everyone knows the age-old game of rock, paper, scissors. It’s been played pretty much all over the world for a century or so, and originated in China ~2000 years ago.

Generally the game is approached in one of two ways. The first is where you try to out-think your opponent, exploiting the inherent non-randomness of people when they play the game. This is essentially applied psychology.

The other way to approach the game is where you assume that you and your opponent are totally rational, i.e. both of you know everything about the game and there is no ‘human’ element to exploit. This is essentially game theory.

In game theory, the two players are perfectly matched: neither one has any particular advantage over the other, and both players have the same chance to win each game. It turns out that the best strategy is to randomly choose rock, paper, or scissors each time (with an equal probability of each).

Using game theory and probability, you can actually prove this. Here’s one way it can be done. Define the following: if you win a game, you get 1 point. If you lose a game, you lose 1 point. R, P, and S are the probabilities of you choosing rock, paper, and scissors respectively. Similarly R', P', and S' are the probabilities your opponent chooses. Obviously R+P+S=1, R'+P'+S'=1, and 0\le R, P, S, R', P', S' \le 1.

Now we calculate the expected point gain per game by taking every possible game outcome, multiplying the point gain/loss of that particular outcome by the probability of it occurring, and then summing them all together. So the points won per game is the following:

 g = 0RR' - 1RP' + 1RS' + 1PR' + 0PP' - 1PS' - 1SR' + 1SP' + 0SS'
Since we know that the best choice is to choose R, P, and S with equal probability, then R=P=S=\frac{1}{3}. Since our opponent is also perfectly rational, he will also choose the same probabilities. So the points won per game is:

 g = \frac{1}{9}\left( 0 - 1 + 1 + 1 + 0 - 1 - 1 + 1 + 0 \right) = 0
Everything sums up to 0, which is what you expect when the game is evenly matched. However, this choice of (R,P,S)=\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right) lends itself to a very interesting consequence. Let’s say your opponent chooses some (R',P',S')\neq \left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right). What happens then? Let’s say he’s really stupid and chooses rock every time, or (R',P',S')=\left(1,0,0\right). Of course the obvious choice for us to win would be to choose paper every time, or (R,P,S)=\left(0,1,0\right). But if we don’t change our strategy, the following happens:

 g = 0R + 1P - 1S = \frac{1}{3} - \frac{1}{3} = 0
Of course, this is obvious without having to calculate it. If your opponent chooses rock every time, and you are choosing each one 1/3 of the time, then 1/3 of the time you’ll tie with rock, 1/3 of the time win with paper, and 1/3 of the time lose with scissors. So let’s see what happens when we choose (R,P,S)=\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right), but R', P', and S' are still unknown:

 g = -\frac{1}{3}P' + \frac{1}{3}S' + \frac{1}{3}R' - \frac{1}{3}S' - \frac{1}{3}R' + \frac{1}{3}P'
Rearrange the terms a bit:

 g = \frac{1}{3}\left( R' - R' \right) + \frac{1}{3}\left( P' - P' \right) + \frac{1}{3}\left( S' - S' \right) = 0
So we see that for choosing (R,P,S)=\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right), it doesn’t matter what kind of strategy or ratio that our opponent chooses: the expected point gain will always be zero and we will always be evenly matched with our opponent.
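This invariance is easy to confirm numerically. Below is a small sketch (function and variable names are mine) that evaluates the expected gain of the uniform mix against a few arbitrary opponent mixes:

```python
import random

# Expected gain per game in standard RPS (win = +1, loss = -1).
def expected_gain(R, P, S, Rp, Pp, Sp):
    return -R*Pp + R*Sp + P*Rp - P*Sp - S*Rp + S*Pp

# Try the uniform mix against several random opponent mixes; the
# expected gain is always 0 (up to floating-point error).
for _ in range(5):
    a, b = sorted((random.random(), random.random()))
    Rp, Pp, Sp = a, b - a, 1 - b      # a random point on the simplex
    assert abs(expected_gain(1/3, 1/3, 1/3, Rp, Pp, Sp)) < 1e-12
```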

This is what’s called the Nash equilibrium, named after John Nash, the subject of A Beautiful Mind. As best I understand it, for a zero-sum game (which RPS certainly is), there always exists a strategy where one player can force the game to remain in equilibrium, no matter what the other player does. For standard RPS, this comes out to be (R,P,S)=\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right). If you choose to play with this distribution, you and your opponent will always be in equilibrium no matter what strategy your opponent tries.

So that brings us to variants of RPS. This whole line of inquiry was inspired by a homework problem that my younger daughter had a few weeks ago. Basically it had a variant of RPS where if you win with rock you get 1 point, win with scissors you get 2 points, and win with paper you get 3 points. Since my daughter is in 1st grade the questions were very easy, i.e. “if you have 10 points and win with scissors, how many points do you have?” etc. But this got me thinking about how this game would actually work. Of course you want to win with paper, but you run into the same problem as regular RPS: you and your opponent are both trying to out-guess each other, so there’s no simple strategy that’s guaranteed to win.

So the next question is, is there a Nash equilibrium for this version of RPS, and if so what is it?

This post is already too long, so I’ll show how to find the Nash equilibrium for this version in my next post.

Back in our younger days, my younger brother and I were quite the aficionados for low-budget martial arts movies. We’re not talking Jackie Chan movies here, those movies are grade-A top quality compared to some of the stuff we regularly watched.

For example, I’ve seen pretty much everything by JCVD made before 2000, almost everything that Cynthia Rothrock has been in, all 3 of the Best of the Best movies (the first is the best of the Best of the Best, though #2 is more entertaining imho), all of the Master Ninja series before I had even heard of MST3K (same goes for Quest of the Delta Knights!), tons of Godfrey Ho Hong Kong dreck that was made by editing together unfinished parts of movies (Ninja Death Squad being my favorite example; I could only find that one scene online, but it’s a very representative example), and everything that Billy Blanks did long before he did Tae Bo (which seems to be pretty much dead now).

And speaking of Billy Blanks, probably my favorite fight scene from a martial arts movie is from one of his movies, Showdown from 1993. It’s pretty much a remake of Karate Kid, with Billy in the Mr. Miyagi role and more-or-less unknown Kenn Scott in the Daniel role (seriously, the only mainstream thing he’s done was the suit actor for Raphael in the old TMNT movies).

Take a look at all these photos of city streets in the rain. I found them all on google image search and I am shamelessly hotlinking them:

1 2 3 4 5 6 7 8 9

I was driving home from work today in the rain, and saw many similar scenes. However I noticed something interesting that I have never noticed before. The street lights, car lights, traffic lights, etc. all reflect off of the wet surface of the road. However, since the road is not a mirror-flat surface, the reflection is smeared out or diffused due to the rough texture of the road. So my question is this: why is the smearing or diffusion of the reflection almost entirely in the vertical direction? Certainly the surface of the asphalt or concrete is not sufficiently anisotropic (i.e. its roughness is the same in all directions) to account for this. Shouldn’t the light spread out as far horizontally as it does vertically? Any insights here?
