# About Gambler’s Ruin

In the game of Gambler’s Ruin, one player, whom we shall call X, plays against the House, a casino with unlimited resources. X begins with an initial stash of money, say $5. Let’s call that amount *i*, for initial. At each round of play, a coin is flipped. If it comes up heads, player X receives $1 from the House. If it comes up tails, X gives $1 from his stash to the House. X hopes to play until his stash has grown to $N. The game is usually played with a fair coin, where the probability of heads is *p = 0.5* and the probability of tails is *q = 1 - p = 0.5*. It is interesting, however, to study the game for any value of *p* with *0 < p < 1*. This variant has applications to fields other than gambling, e.g., insurance and stock trading. Below we discuss the theory behind Gambler’s Ruin, then take up its applications.

To get a feel for the game, you can try the Demo App, which is pictured above. The app is written in Elm, a language of pure functions and my favorite language. It is well-suited to making small interactive simulations. See the article on Schelling’s segregation model for another interactive simulation.

## Random Walks

The Gambler’s Ruin problem is a special case of the phenomenon of *random walk*. The terminology comes from the following thought experiment. Consider a line extending infinitely far to the left and right, with marks … -2, -1, 0, 1, 2, … made at equal intervals. A particle (an atom, a grain of pollen, a soccer ball, or a person) is placed on the line at position *i*. At each tick of the clock, the particle moves to the right with probability *p* or to the left with probability *q = 1 - p*. The particle “walks” the line, but in a drunken or random fashion, hence the name. One can ask various questions, such as “What is the probability that the particle gets to location *N > 0* before it gets to location *0*?” This question is equivalent to the question “A gambler starts with an initial stake of $*i* and will play as above until he goes broke or wins $N. What is the probability that he wins $N?”
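This question is easy to explore by simulation. Below is a minimal Monte Carlo sketch in Python (for illustration only; the Demo App itself is written in Elm, and the function names here are my own):

```python
import random

def reaches_N_first(i, N, p):
    """Walk from position i, stepping +1 with probability p and -1
    otherwise, until hitting 0 or N. Return True if N is hit first."""
    pos = i
    while 0 < pos < N:
        pos += 1 if random.random() < p else -1
    return pos == N

def estimate_win_prob(i, N, p, trials=20_000):
    """Monte Carlo estimate of the probability of reaching N before 0."""
    wins = sum(reaches_N_first(i, N, p) for _ in range(trials))
    return wins / trials

# Fair coin, starting stake $5, goal $40; the estimate should land
# near the exact answer i/N = 0.125 derived below.
print(estimate_win_prob(5, 40, 0.5))
```

With 20,000 trials the estimate typically falls within a fraction of a percent of the exact value.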

Random walks can be carried out in one dimension, as described above, or in two dimensions or more. In two dimensions, the random walk models Brownian motion, a phenomenon discovered by the botanist Robert Brown in 1827. Brown studied pollen grains suspended in water using a microscope. He saw that the grains danced about, jiggling to and fro. For many years, the cause of this motion remained a mystery. It turns out to be due to the random bombardment of the tiny pollen grains by water molecules. The first satisfactory (and mathematical) explanation was given in 1905 by Albert Einstein. Random walk is also a model of diffusion at the molecular level.

## Back to Gambler’s Ruin

Reference (1) gives a (mathy but very nice) answer to the question just posed. We will follow the exposition there, but without the derivation of the formulas. In the case *p = q = 0.5* of a “fair game,” the probability of winning is

` (1) P = i/N`

and the probability of ruin (going broke) is *Q = 1 - P*. In the case of a player who starts with $5 and will play until he gains $40, one has *P = 5/40 = 1/8 = 0.125*. The probability of ruin is *Q = 7/8 = 0.875*. Note that if *N* is very large, then *P* is very small, and so *Q* is nearly equal to *1*. In practical terms, this means that if player X is very greedy, his eventual ruin is almost certain.
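As a quick check, formula (1) can be evaluated directly (a small Python sketch of my own, not part of the app):

```python
def fair_win_prob(i, N):
    """Formula (1): probability of reaching $N before ruin in a fair game."""
    return i / N

P = fair_win_prob(5, 40)
Q = 1 - P
print(P, Q)  # → 0.125 0.875
```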

If *p* is different from *0.5*, the formula is a bit more complicated. Let *r = q/p*, where *q = 1 - p*. Then the probability of winning is

` (2) P = (1 - r^i)/(1 - r^N)`

and the probability of ruin is

` (3) Q = 1 - P`

In the image of the app above, we have taken *p = 0.501*, so player X has a very slight advantage. We find that *P = 0.1339*. The odds of winning have increased slightly, and the odds of going broke have decreased correspondingly. If we set *p = 0.51*, the change is significantly larger: *P = 0.2271*. Much better! Setting *p = 0.6*, we find a dramatic increase: *P = 0.8683*. In the case *p > 0.5*, the relation between *P* and *p* is very much non-linear, just as the formula says.
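Formulas (1) and (2) can be packaged in a few lines of Python; the sketch below (with a function name of my choosing) reproduces the values just quoted:

```python
def win_prob(i, N, p):
    """Probability of reaching N before 0, from formulas (1) and (2)."""
    if p == 0.5:
        return i / N              # fair-game case, formula (1)
    r = (1 - p) / p               # r = q/p
    return (1 - r**i) / (1 - r**N)

for p in (0.501, 0.51, 0.6):
    print(p, round(win_prob(5, 40, p), 4))
# prints 0.1339, 0.2271, and 0.8683 for the three values of p
```

Note how steeply *P* climbs as *p* moves past 0.5, reflecting the exponential dependence on *r* in formula (2).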

## Application 1: Insurance

Suppose that an insurance company starts with a reserve of *i* units of money. The unit is arbitrary, but we will take it to be $100,000, and we will take *i = 100*, so the initial reserve is $10 million. Each day, one of two things happens. With probability *p*, one unit of cash is added to the reserves. With probability *q = 1 - p*, the insurance company suffers a loss of 2 units, so the net result that day is that the reserves decrease by 1 unit.

To summarize, each day the reserves either increase by one unit or decrease by one unit, with probabilities *p* and *q*, respectively. Now the question: What is the probability that the company is ruined, i.e., eventually goes broke? This probability is the limit of the probability of ruin *Q* as *N* goes to infinity.

To compute this probability, assume that *p > q*, so that *r < 1*. This is the case where the odds are in favor of the company: the probability of a daily gain in the reserves is greater than the probability of a loss. If *r < 1*, then the limit of the quantity `1 - r^N` as *N* goes to infinity is 1. Therefore, in the limit, `P = 1 - r^i`, and so

` (4) Q = r^i`

Remember that `r = q/p < 1`, so the probability of ruin *Q* decreases as the initial reserve amount *i* increases. This is reasonable, and (4) gives the specific relation between the initial reserve amount and the probability of ruin.

Time for an example. Our insurance company has a reserve of 100 units — $10 million. Let’s suppose that *p = 0.51*. Then *r = 0.9608*, and one finds that

` (5) Q(100) = 0.9608^100 = 0.018`

The probability of ruin is 1.8%. This is a bit too high. The company actuary suggests that the reserves be doubled, so that

` (6) Q(200) = 0.9608^200 = 0.00034`

The probability of ruin is now 0.034%, an acceptable level of risk.
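The actuary’s arithmetic in (5) and (6) is easy to reproduce (a Python sketch; `ruin_prob` is a name of my own choosing):

```python
def ruin_prob(i, p):
    """Formula (4): limiting probability of ruin Q = r^i, valid when p > q."""
    r = (1 - p) / p
    return r**i

print(round(ruin_prob(100, 0.51), 3))  # → 0.018, as in (5)
print(round(ruin_prob(200, 0.51), 5))  # ≈ 0.00034, as in (6)
```

Doubling the reserve squares the ruin probability, which is why the actuary’s suggestion helps so dramatically.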

The kind of analysis just given lies at the heart of what actuaries do in setting the conditions of insurance policies.

## Application 2: Trading Stocks

A similar analysis applies to certain types of stock trades. We recall the example of Ellen in section 1.3 of reference (1). Ellen buys a stock trading at $10 per share. The price moves as in a random walk with *p = 0.55*. That is, each day the price moves up by one dollar with probability *0.55*, and it moves down with probability *0.45*. Ellen wants to know the probability of selling at $15 per share, a 50% gain. She has put in a stop-loss order at $5 per share, so the worst she can do is lose one half of her investment. This is the Gambler’s Ruin problem with *i = 5* and *N = 10*. Using formula (2), we find that

` r = q/p = 0.45/0.55 = 0.8182`

` P = (1 - r^5)/(1 - r^10) = 0.7317`

Not too bad! Ellen is a risk-taker, so she carries out her strategy.
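Ellen’s numbers follow from formula (2) in the same way (again a small Python sketch for illustration, self-contained so it can be run on its own):

```python
def win_prob(i, N, p):
    """Formula (2): probability of reaching N before 0 when p != 0.5."""
    r = (1 - p) / p
    return (1 - r**i) / (1 - r**N)

# Buy at $10, sell at $15, stop-loss at $5: i = 5, N = 10, p = 0.55.
print(round(win_prob(5, 10, 0.55), 4))  # ≈ 0.7317
```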