The Riddler post for today is about a bar game where you flip a coin and move forward or backward depending on the result. Here is the problem:
Consider a hot new bar game. It’s played with a coin, between you and a friend, on a number line stretching from negative infinity to positive infinity. (It’s a very, very long bar.) You are assigned a winning number, the negative integer -X, and your friend is assigned his own winning number, a positive integer, +Y. A marker is placed at zero on the number line. Then the coin is repeatedly flipped. Every time the coin lands heads, the marker is moved one integer in a positive direction. Every time the coin lands tails, the marker moves one integer in a negative direction. You win if the marker reaches -X first, while your friend wins if the marker reaches +Y first. (Winner keeps the coin.)
How long can you expect to sit, flipping a coin, at the bar? Put another way, what is the expected number of coin flips in a complete game?
Here is my solution:
[Show Solution]
This problem is famously known as Gambler’s ruin and it’s an example of a one-dimensional random walk. In the typical Gambler’s ruin setup, you flip a coin repeatedly and either gain \$1 or lose \$1 depending on the outcome. You have a fixed starting budget and a desired goal amount. You keep playing until either you have reached your goal amount, or you run out of money. The bar game we’re looking at here is precisely Gambler’s ruin, but shifted so that you start with \$0 and end when you’re either up \$Y or you’re \$X in debt.
Here is one possible solution. Let’s define $E_n$ to be the expected number of coin flips in a complete game, assuming the marker is currently at location $n$ on the number line. We would like to find $E_0$. Suppose that at each flip, we move forward with probability $p$ and backward with probability $q$ (where $p+q=1$). If we are currently at $n$, then after one flip we will be at $n+1$ with probability $p$ or at $n-1$ with probability $q$; either way, we incur one additional flip. We can therefore write the following recurrence relations for the expected number of coin flips:
\begin{align}
E_{-X} &= 0 \\
E_n &= p E_{n+1} + q E_{n-1} + 1 \qquad\text{for }-X < n < Y\\
E_Y &= 0
\end{align}
In matrix form, these equations look like:
\[
\begin{bmatrix}
1 & -p & 0 & \dots & 0 \\
-q & 1 & -p & \ddots & \vdots \\
0 & -q & 1 & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & -p \\
0 & \dots & 0 & -q & 1
\end{bmatrix}
\begin{bmatrix}
E_{-X+1} \\ E_{-X+2} \\ \vdots \\ E_{Y-2} \\ E_{Y-1}
\end{bmatrix}
=
\begin{bmatrix}
1 \\ 1 \\ \vdots \\ 1 \\ 1
\end{bmatrix}
\]
This system of equations has a nice structure: the matrix on the left is both tridiagonal and Toeplitz. One can solve such a system in linear time using a specialized form of Gaussian elimination (the Thomas algorithm), for example.
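For instance, here is a minimal numerical sketch in Python (assuming NumPy and SciPy are available; the helper name `expected_flips` is my own) that assembles the banded matrix above and solves for $E_0$:

```python
import numpy as np
from scipy.linalg import solve_banded

def expected_flips(X, Y, p=0.5):
    """Expected number of flips E_0, found by solving the banded system above."""
    q = 1.0 - p
    m = X + Y - 1                 # interior states n = -X+1, ..., Y-1
    ab = np.zeros((3, m))         # banded storage expected by solve_banded
    ab[0, 1:] = -p                # superdiagonal: coefficient of E_{n+1}
    ab[1, :] = 1.0                # main diagonal
    ab[2, :-1] = -q               # subdiagonal: coefficient of E_{n-1}
    E = solve_banded((1, 1), ab, np.ones(m))
    return E[X - 1]               # entry corresponding to n = 0

print(expected_flips(3, 5))       # prints 15.0 for a fair coin, i.e. X*Y
```

For a fair coin this reproduces $E_0 = XY$ exactly (up to floating-point error), and the same code handles the biased case $p \ne \tfrac{1}{2}$.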
Although the equations have a closed-form solution, it is pretty messy. Instead, I’ll show how you can derive the answer by hand in the case $p=q=\tfrac{1}{2}$. Using the boundary condition $E_Y = 0$, the last equation can be solved for $E_{Y-1}$ in terms of $E_{Y-2}$:
\[
E_{Y-1} = \tfrac{1}{2} E_{Y-2} + 1
\]
We can then substitute this into the second-to-last equation and solve for $E_{Y-2}$ in terms of $E_{Y-3}$:
\[
E_{Y-2} = \tfrac{2}{3} E_{Y-3} + 2
\]
Substitute into the third-to-last equation and solve for $E_{Y-3}$ in terms of $E_{Y-4}$:
\[
E_{Y-3} = \tfrac{3}{4} E_{Y-4} + 3
\]
The pattern is now clear: $E_{Y-k} = \tfrac{k}{k+1} E_{Y-k-1} + k$ (you can prove this by induction; see the verification below). Continuing in this fashion until we get to the starting point,
\[
E_{1} = (1- \tfrac{1}{Y}) E_{0} + Y - 1
\]
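To verify the pattern $E_{Y-k} = \tfrac{k}{k+1} E_{Y-k-1} + k$ by induction, substitute it into the recurrence at $n = Y-k-1$:
\begin{align}
E_{Y-k-1} &= \tfrac{1}{2} E_{Y-k-2} + \tfrac{1}{2} E_{Y-k} + 1 \\
&= \tfrac{1}{2} E_{Y-k-2} + \tfrac{1}{2}\left( \tfrac{k}{k+1} E_{Y-k-1} + k \right) + 1
\end{align}
Solving for $E_{Y-k-1}$ yields $E_{Y-k-1} = \tfrac{k+1}{k+2} E_{Y-k-2} + (k+1)$, which is the same pattern with $k$ replaced by $k+1$.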
We can do something similar starting from the other end: solve for $E_{-X+1}$ in terms of $E_{-X+2}$, then $E_{-X+2}$ in terms of $E_{-X+3}$, and so on. This time, we obtain:
\[
E_{-1} = (1- \tfrac{1}{X}) E_0 + X - 1
\]
We can now combine the fruits of our labor with the only equation we haven’t used yet, which relates $E_0$, $E_1$, and $E_{-1}$. The result is that:
\begin{align}
E_0 &= \tfrac{1}{2} E_{-1} + \tfrac{1}{2} E_1 + 1 \\
&= \tfrac{1}{2}\left( (1- \tfrac{1}{X}) E_0 + X - 1 \right) + \tfrac{1}{2} \left( (1 - \tfrac{1}{Y}) E_0 + Y - 1 \right) + 1\\
&= \tfrac{1}{2}\left( 2- \tfrac{1}{X} – \tfrac{1}{Y} \right) E_0 + \tfrac{1}{2}(Y + X)
\end{align}
Rearranging and solving for $E_0$, we obtain:
$\displaystyle
E_0 = \frac{ Y + X }{ \tfrac{1}{X} + \tfrac{1}{Y} } = XY
$
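For example, with $X=3$ and $Y=5$, the game lasts $3 \times 5 = 15$ flips on average, even though the marker starts only three steps away from your winning number.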
A similar approach can be used to compute the probability that each player will win. To do this, let $P^Y_n$ be the probability that we’ll reach $Y$ first given that we are currently at $n$, and similarly define $P^X_n$. Then the recursions look like:
\begin{align}
P^Y_{-X} &= 0 \\
P^Y_n &= p P^Y_{n+1} + q P^Y_{n-1} \qquad\text{for }-X < n < Y\\
P^Y_Y &= 1
\end{align}
Written as matrices, the only difference between this problem and that of finding the expectation is the right-hand side of the equation! Using a similar recursive approach (again with $p=q=\tfrac{1}{2}$), we can solve the equations and we obtain:
$\displaystyle
P_0^X = \frac{Y}{X+Y} \qquad\text{and}\qquad P_0^Y = \frac{X}{X+Y}
$
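As a sanity check on both the expected length and the win probabilities, here is a minimal Monte Carlo sketch in Python (the function name `simulate` and the trial count are my own choices):

```python
import random

def simulate(X, Y, p=0.5, trials=100_000):
    """Estimate (expected game length, probability of ending at -X)."""
    total_flips = 0
    games_won = 0                 # games that end at -X (you win)
    for _ in range(trials):
        pos = 0
        while -X < pos < Y:
            pos += 1 if random.random() < p else -1   # heads moves +1
            total_flips += 1
        games_won += (pos == -X)
    return total_flips / trials, games_won / trials

# With X = 3 and Y = 5, expect results near (15.0, 0.625) for a fair coin.
print(simulate(3, 5))
```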
So, practically speaking, if we must travel twice as far on the number line as our opponent, we are half as likely to win the game. Just for fun, I made some plots that show how the expected value and the probability of winning change if you’re not using a fair coin. The plots show the case where $X+Y = 50$. For example, the 40% curve means that the coin flips in your favor 40% of the time.
[Plots: expected number of flips and probability of winning for various coin biases, with $X+Y = 50$.]
If you’re interested in a derivation of the solution for the case $p\ne \tfrac{1}{2}$, there is a well-written set of notes here.
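For reference, the standard closed-form answers in the biased case (stated here without derivation), writing $\rho = q/p \ne 1$, are:
\[
P_0^Y = \frac{1-\rho^X}{1-\rho^{X+Y}} \qquad\text{and}\qquad E_0 = \frac{X}{q-p} - \frac{X+Y}{q-p}\cdot\frac{1-\rho^X}{1-\rho^{X+Y}}
\]
Both expressions reduce to the fair-coin answers $\frac{X}{X+Y}$ and $XY$, respectively, in the limit $p \to \tfrac{1}{2}$.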
And here is a much slicker solution, courtesy of Daniel Ross:
[Show Solution]
Define the random variables $\{Z_i\}$ and $S_k$ as:
\[
Z_i = \begin{cases} 1 & \text{the }i^\text{th}\text{ flip is }H \\
-1 & \text{the }i^\text{th}\text{ flip is }T
\end{cases}
\qquad\text{and}\qquad S_k = \sum_{i = 1}^k Z_i
\]
Also, let $T$ be the random variable corresponding to the duration of the game. Then $S_T$ is a random variable whose only possible values are $-X$ (with some probability $r$) or $Y$ (with probability $1-r$).
Wald’s equation and Wald’s second equation (sometimes also called the Blackwell-Girshick equation) give expressions for the mean and variance of random-length sums of identically distributed random variables, provided the length $T$ is a stopping time with finite mean (both conditions hold here):
\begin{align}
\mathbb{E}(T) \mathbb{E}(Z_1) &= \mathbb{E}(S_T)\\
\mathbb{E}(T) \mathrm{Var}(Z_1) &= \mathbb{E}\left( (S_T - \mathbb{E}(Z_1) T )^2 \right)
\end{align}
Since $\mathbb{E}(Z_1) = 0$ for a fair coin, the first equation simplifies to $0 = -rX + (1-r)Y$. Solving for $r$, we obtain
\[
r = \mathbb{P}(\text{game ends at } -X) = \frac{Y}{X+Y}
\]
Since $\mathrm{Var}(Z_1) = 1$, the second equation is $\mathbb{E}(T) = rX^2 + (1-r)Y^2$. Substituting the value we found for $r$, we obtain
\[
\mathbb{E}(T) = \frac{Y}{X+Y} X^2 + \frac{X}{X+Y} Y^2 = XY
\]
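As a quick numerical check: with $X = 3$ and $Y = 5$, we get $r = \tfrac{5}{8}$, so $\mathbb{E}(T) = \tfrac{5}{8}\cdot 9 + \tfrac{3}{8}\cdot 25 = \tfrac{120}{8} = 15 = XY$, in agreement with the first solution.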
and that’s it!
Nice post Laurent – although as a committed maths nerd, I was sorry to see the general case of the biased coin relegated to a link! ; )
If you’re interested, I’ve written up some thoughts on the solution and another Riddler extension on my own blog here: theexcelements.com/
Very nice solution! I love generating functions!