This week’s Fiddler is a challenging probability question.

You’re playing a game of pinball that includes four lanes, each of which is initially unlit. Every time you flip the pinball, it passes through exactly one of the four lanes (chosen at random) and toggles that lane’s state. So if that lane is unlit, it becomes lit after the ball passes through. But if the lane is lit, it becomes unlit after the ball passes through. On average, how many times will you have to flip the pinball until all four lanes are lit?

Extra credit: Instead of four lanes, now suppose your pinball game has $n$ lanes. And let’s say that $E(n)$ represents the average number of pinball flips it takes until all $n$ lanes are lit up. Now, each time you increase the number of lanes by one, you find that it takes you approximately twice as long to light up all the lanes. In other words, $E(n+1)$ seems to be about double $E(n)$. But upon closer examination, you find that it’s not quite double. Moreover, there’s a particular value of $n$ where the ratio $E(n+1)/E(n)$ is at a minimum. What is this value of $n$?

My solution:

[Show Solution]

### Qualitative behavior and derivation of $E(n)$

Let’s fix $n$ for now, and define $T_k$ to be the expected number of additional flips required to light up all lanes, given that $k$ lanes are currently lit. Our goal is to find $E(n) = T_0$. Two things can happen each time we flip the pinball:

- We pass through a lit lane, which causes it to become unlit. This happens with probability $\tfrac{k}{n}$, leaving $k-1$ lit lanes.
- We pass through an unlit lane, which causes it to light up. This happens with probability $\tfrac{n-k}{n}$, leaving $k+1$ lit lanes.

The boundary condition is that $T_n=0$, because if all lanes are lit, we have accomplished the goal and there are no more flips required. Putting everything together, we obtain the following recursion for $T_k$:

\begin{align}

T_0 &= T_1 + 1 \\

T_k &= \tfrac{k}{n}T_{k-1} + \tfrac{n-k}{n}T_{k+1} + 1 && \text{for }k=1,\dots,n-1.\\

T_n &= 0

\end{align}This is a set of $n+1$ linear equations in the $n+1$ unknowns $\{T_0,\dots,T_n\}$, so we have enough information to solve the problem and find what we’re after, which is $T_0 = E(n)$.
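Before simplifying anything, we can sanity-check this linear system numerically. Here is a quick sketch in Python (the function name is mine):

```python
import numpy as np

def expected_flips_linear(n):
    """Solve the (n+1)-by-(n+1) linear system for T_0, ..., T_n and return T_0."""
    A = np.zeros((n + 1, n + 1))
    b = np.ones(n + 1)
    A[0, 0], A[0, 1] = 1.0, -1.0      # T_0 - T_1 = 1
    for k in range(1, n):
        A[k, k] = 1.0                 # T_k - (k/n) T_{k-1} - ((n-k)/n) T_{k+1} = 1
        A[k, k - 1] = -k / n
        A[k, k + 1] = -(n - k) / n
    A[n, n] = 1.0
    b[n] = 0.0                        # boundary condition: T_n = 0
    return np.linalg.solve(A, b)[0]   # E(n) = T_0

print(expected_flips_linear(4))       # 21.333... = 64/3
```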

If you’re thinking “hey wait a minute… isn’t this just the MFPT for an Ehrenfest Markov chain?”, I got you! I wanted to solve the problem without bringing in any heavy machinery first. I added a section at the end of this post that explains a bunch of connections to Markov chains.

Ok, back to our system of equations. Let’s start by making the substitution

\[

\Delta_k := T_{k-1}-T_k\qquad\text{for }k=1,\dots,n.

\]Since $T_n=0$, we can write $T_0$ in terms of the $\Delta_k$’s as:

\[

E(n) = T_0 = \Delta_1 + \Delta_2 + \cdots + \Delta_n

\]Meanwhile, we can rewrite the recursion in terms of the $\Delta_k$’s as:

\begin{align}

&& T_k &= \tfrac{k}{n}T_{k-1} + \tfrac{n-k}{n}T_{k+1} + 1\\

\iff && T_k-T_{k+1} &= \tfrac{k}{n} \left( T_{k-1}-T_{k+1} \right) + 1 \\

\iff && \Delta_{k+1} &= \tfrac{k}{n}\left( \Delta_{k}+\Delta_{k+1} \right) + 1 \\

\iff && \left(1-\tfrac{k}{n}\right)\Delta_{k+1} &= \tfrac{k}{n}\Delta_k + 1 \\

\iff && \Delta_{k+1} &= \tfrac{k}{n-k}\Delta_k + \tfrac{n}{n-k}

\end{align}So our simplified recursion looks like:

\begin{align}

\Delta_1 &= 1 \\

\Delta_{k+1} &= \tfrac{k}{n-k}\Delta_k + \tfrac{n}{n-k} && \text{for }k=1,\dots,n-1

\end{align}We can now solve for each $\Delta_k$ by forward substitution. The first few terms look like (rearranged to highlight the pattern):

\begin{align}

\Delta_1 &= \tfrac{n}{n} \\

\Delta_2 &= \tfrac{n}{(n-1)n} + \tfrac{n}{n-1} \\

\Delta_3 &= \tfrac{2\cdot 1 n}{(n-2)(n-1)n} + \tfrac{2n}{(n-2)(n-1)} + \tfrac{n}{n-2} \\

\Delta_4 &= \tfrac{3\cdot 2 \cdot 1n}{(n-3)(n-2)(n-1)n} + \tfrac{3\cdot 2 n}{(n-3)(n-2)(n-1)} + \tfrac{3n}{(n-3)(n-2)} + \tfrac{n}{n-3}

\end{align}The pattern is clear, and we have the following formula in terms of factorials, which we can rewrite in terms of binomial coefficients:

\begin{align}

\Delta_k &= \sum_{i=0}^{k-1} \frac{n(k-1)!/i!}{(n-i)!/(n-k)!} = \sum_{i=0}^{k-1} \frac{\binom{n}{i}}{\binom{n-1}{k-1}}

\end{align}Since $E(n)$ is the sum of the $\Delta_k$’s, we can now obtain a compact formula for the average number of pinball flips it takes until all $n$ lanes are lit up.

$\displaystyle

E(n) = \sum_{k=1}^n \sum_{i=0}^{k-1} \frac{\binom{n}{i}}{\binom{n-1}{k-1}}

$
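Both the forward-substitution recursion for the $\Delta_k$’s and this compact double sum are easy to implement with exact rational arithmetic. A sketch (function names are mine):

```python
from fractions import Fraction
from math import comb

def E_recursive(n):
    """Sum the Delta_k obtained by forward substitution."""
    delta = Fraction(1)               # Delta_1 = 1
    total = delta
    for k in range(1, n):
        # Delta_{k+1} = (k/(n-k)) Delta_k + n/(n-k)
        delta = Fraction(k, n - k) * delta + Fraction(n, n - k)
        total += delta
    return total

def E_closed(n):
    """E(n) = sum_{k=1}^{n} sum_{i=0}^{k-1} C(n,i) / C(n-1,k-1)."""
    return sum(Fraction(sum(comb(n, i) for i in range(k)), comb(n - 1, k - 1))
               for k in range(1, n + 1))

# The two formulas agree on small cases
assert all(E_recursive(n) == E_closed(n) for n in range(1, 12))
print(E_closed(4))                    # 64/3
```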

Let’s try to make some sense of this complicated formula. We can manipulate it a bit to first obtain:

\[

E(n) = 2^n \sum_{k=1}^n \underbrace{\frac{1}{\binom{n-1}{k-1}}}_{\alpha_k} \,\underbrace{\sum_{i=0}^{k-1} \frac{1}{2^n}\binom{n}{i} }_{\Phi_k}

\]So we can think of $E(n)$ as a dot product between two sequences $\{\alpha_k\}$ and $\{\Phi_k\}$, scaled by $2^n$. But what do these sequences represent?

Recall that the binomial distribution with $p=1/2$ tends to a normal distribution with mean $n/2$ and variance $n/4$ as $n$ gets large, so we have:

\[

\frac{1}{2^n} \binom{n}{i} \approx \frac{1}{\sqrt{\pi n/2}}\exp\left( -\frac{(i-n/2)^2}{n/2} \right)

\]Therefore, the term $\Phi_k$ is approximately the cumulative distribution function of a normal distribution. Meanwhile, the weights $\alpha_k$ start at $1$ when $k=1$, reach a minimum at $k=\tfrac{n+1}{2}$, and then increase back to $1$ as $k$ approaches $n$. Here is what these sequences look like when $n=\{8,20,50\}$:

As we can see, the weights get steeper and steeper. The last two weights are

\[

\alpha_{n-1} = \binom{n-1}{n-2}^{-1} = \frac{1}{n-1}\qquad\text{and}\qquad

\alpha_{n} = \binom{n-1}{n-1}^{-1} = 1

\]Therefore, as $n$ gets large, $\Phi_k \approx 1$ for the largest values of $k$ while the intermediate weights $\alpha_k$ are vanishingly small, so the last two terms dominate and we have

\[

E(n) \approx 2^n\left(1+\frac{1}{n-1}\right) = 2^n \frac{n}{n-1}

\]
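A quick numerical check of this asymptotic, reusing the exact formula for $E(n)$ (a sketch; the function name is mine):

```python
from fractions import Fraction
from math import comb

def E(n):
    """E(n) = sum_{k=1}^{n} sum_{i=0}^{k-1} C(n,i) / C(n-1,k-1)."""
    return sum(Fraction(sum(comb(n, i) for i in range(k)), comb(n - 1, k - 1))
               for k in range(1, n + 1))

# Ratio of the exact value to the approximation 2^n * n/(n-1);
# it should approach 1 from above as n grows.
for n in (10, 20, 40):
    print(n, float(E(n) / (2**n * Fraction(n, n - 1))))
```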

This explains why we can expect that

\[

\lim_{n\to\infty} \frac{E(n+1)}{E(n)} = 2

\]

Unfortunately, I don’t see any way to make meaningful simplifications to the expression for $E(n)$, so we’ll have to proceed numerically to answer the questions asked.

### The answer

The first problem asks us to calculate $E(4)$, which is easy enough:

$\displaystyle

E(4) = \frac{64}{3} \approx 21.3333

$

We can confirm the answer is correct via Monte Carlo simulation, and here is the histogram we obtain after $10^7$ trials.

As we can see, the empirical average of $21.3291$ is in good agreement with the true value of $21.3333$ we computed. The histogram only has bars at even flip counts, since it is impossible to go from $0$ lit lanes to $4$ in an odd number of flips: each flip toggles the parity of the number of lit lanes, so going from an even count to an even count requires an even number of flips.
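The simulation itself is only a few lines. Here is a sketch with a fixed seed and fewer trials than the $10^7$ used above, to keep it fast:

```python
import random

def flips_until_all_lit(n=4, rng=random):
    """Simulate one game: flip until all n lanes are lit, return the flip count."""
    lit = [False] * n
    flips = 0
    while not all(lit):
        lane = rng.randrange(n)       # each flip hits a uniformly random lane...
        lit[lane] = not lit[lane]     # ...and toggles its state
        flips += 1
    return flips

random.seed(12345)
trials = 100_000
avg = sum(flips_until_all_lit() for _ in range(trials)) / trials
print(avg)                            # should be close to 64/3 = 21.33
```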

### Extra credit

We can plot the function $\frac{E(n+1)}{E(n)}$ as a function of $n$, and we obtain:

Sure enough, the values are close to $2$ (I started the plot at $n=4$ to see more detail), and the minimum value occurs at $n=6$. This minimum value is:

$\displaystyle

\frac{E(7)}{E(6)} = \frac{151}{78} \approx 1.9359

$
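Scanning the ratio with exact arithmetic confirms this (a sketch; I cap the scan at $n=20$, since the ratio appears to increase monotonically toward $2$ afterwards):

```python
from fractions import Fraction
from math import comb

def E(n):
    """E(n) = sum_{k=1}^{n} sum_{i=0}^{k-1} C(n,i) / C(n-1,k-1)."""
    return sum(Fraction(sum(comb(n, i) for i in range(k)), comb(n - 1, k - 1))
               for k in range(1, n + 1))

ratios = {n: E(n + 1) / E(n) for n in range(1, 21)}
n_min = min(ratios, key=ratios.get)
print(n_min, ratios[n_min])           # 6 151/78 (about 1.9359)
```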

The solution method above was relatively straightforward, but it turns out this problem has a Markov chain interpretation, and it’s actually a well-studied type of model (enough that it has a name!). If you’re interested, read on…

### Markov chain interpretation

We can also interpret this problem as a Markov chain. Here, the state $k\in\{0,1,\dots,n\}$ represents having $k$ lanes lit up. The transition probabilities are represented below:

and the associated transition matrix is:

\[

P = \begin{bmatrix}

0 & 1 & 0 & 0 & \cdots & 0 & 0 \\

\tfrac{1}{n} & 0 & \tfrac{n-1}{n} & 0 & \cdots & 0 & 0 \\

0 & \tfrac{2}{n} & 0 & \tfrac{n-2}{n} & \ddots & \vdots & \vdots \\

0 & 0 & \tfrac{3}{n} & 0 & \ddots & 0 & 0 \\

\vdots & \vdots & \ddots & \ddots & \ddots &\tfrac{2}{n} & 0 \\

0 & 0 & \cdots & 0 & \tfrac{n-1}{n} & 0 & \tfrac{1}{n} \\

0 & 0 & \cdots & 0 & 0 & 1 & 0

\end{bmatrix}

\]This particular transition matrix has a name! It’s called the Ehrenfest model, and it is used to model diffusion, among other things. The standard setup is that you have two urns and a total of $n$ marbles. At each step, one of the $n$ marbles is chosen uniformly at random and jumps from its current urn to the other one. In our problem, all $n$ marbles start in the first urn, and we are asked to find the expected number of jumps required before all marbles first simultaneously end up in the other urn.

The expected number of steps required to first reach a given state when starting at another given state in a Markov chain is called the Mean First Passage Time (MFPT). It is also sometimes called the First Hitting Time. The way to calculate the MFPT is to solve the system of equations

\[

\hat{P}x + \mathbf{1} = x,

\]where $\mathbf{1}$ is the vector of ones, and $\hat P$ is the transition matrix with the rows and columns corresponding to the target set removed. In our case, the target set is the last state, so we remove the last row and column. Then $x$ is the vector of MFPTs from each of the remaining states, and we are interested in its first component. The system of equations above is precisely the one we derived earlier, so nothing new here; just another way to get the same answer!
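For completeness, here is the same computation phrased in Markov-chain terms, solving $(I-\hat P)x = \mathbf{1}$ directly (a sketch; the function name is mine):

```python
import numpy as np

def mfpt_to_full(n):
    """Mean first passage time from state 0 to state n in the Ehrenfest chain."""
    P = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        if k > 0:
            P[k, k - 1] = k / n           # a lit lane becomes unlit
        if k < n:
            P[k, k + 1] = (n - k) / n     # an unlit lane lights up
    P_hat = P[:n, :n]                     # drop the target state's row and column
    x = np.linalg.solve(np.eye(n) - P_hat, np.ones(n))   # (I - P_hat) x = 1
    return x[0]                           # MFPT starting from state 0

print(mfpt_to_full(4))                    # 21.333... = 64/3
```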
