A coin-flipping game

This Riddler puzzle involves a particular coin-flipping game. Here is the problem:

I flip a coin. If it’s heads, I’ve won the game. If it’s tails, then I have to flip again, now needing to get two heads in a row to win. If, on my second toss, I get another tails instead of a heads, then I now need three heads in a row to win. If, instead, I get a heads on my second toss (having flipped a tails on the first toss), then I still need to get a second heads to have two heads in a row and win, but if my next toss is a tails (having thus tossed tails-heads-tails), I now need to flip three heads in a row to win, and so on. The more tails you’ve tossed, the more heads in a row you’ll need to win this game.

I may flip a potentially infinite number of times, always needing to flip a series of N heads in a row to win, where N is T + 1 and T is the number of cumulative tails tossed. I win when I flip the required number of heads in a row.

What are my chances of winning this game? (A computer program could calculate the probability to any degree of precision, but is there a more elegant mathematical expression for the probability of winning?)
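As the parenthetical suggests, the probability is easy to compute numerically. Here is a minimal sketch of such a computation (in Python; the function name and the truncation depth are my own choices). Once $T$ tails have been tossed you need $n = T+1$ heads in a row, and the chance of completing that run before the next tail appears is $(1/2)^n$, which gives a simple backward recursion:

```python
def win_probability(p_heads=0.5, max_tails=60):
    """Estimate the chance of eventually winning the game described above.

    W(n) = probability of winning when n heads in a row are currently needed.
    Each "round" either finishes with n straight heads (probability p_heads**n)
    or is interrupted by a tail, after which n + 1 heads are needed:
        W(n) = p**n + (1 - p**n) * W(n + 1).
    W(n) shrinks geometrically in n, so truncating at a large cutoff is harmless.
    """
    w = 0.0                            # stand-in for W(max_tails + 1), which is ~ 0
    for n in range(max_tails, 0, -1):  # back-substitute from the cutoff down to n = 1
        q = p_heads ** n               # chance of completing the current run of n heads
        w = q + (1.0 - q) * w
    return w

print(win_probability())               # ~ 0.711212
```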

Here is my solution:

5 thoughts on “A coin-flipping game”

  1. Very elegant! I’m sure I’m not the only one who tried to solve the problem by calculating the probability, P(N), of winning the game after N flips, and then summing P(N) from N=1 to infinity. But I couldn’t come up with an analytic expression of P(N). I wish I had thought of your approach.

    1. I solved it your way, but with N = the number of tails flipped so far. Then
      $P(N) = 2^{-N-1} \prod_{k=0}^{N-1}\left[1-2^{-k-1}\right]$.
      The $k$th factor in the product is the probability of NOT winning after the $k$th tail and before the $(k+1)$th, and the prefactor is the probability of winning after the $N$th tail and before the $(N+1)$th. Numerically, $\sum_{N=0}^{\infty} P(N)$ equals Laurent’s number (checked in the sketch below). I presume this can be shown analytically, but I have not done so.

      1. Right, my mistake was that I set N to be the number of flips instead of the number of tails so far.
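A quick numerical check of the sum above (a minimal sketch; the cutoff of 200 terms is an arbitrary choice, and more than enough since the terms shrink geometrically):

```python
# Sum P(N) = 2^{-(N+1)} * prod_{k=0}^{N-1} (1 - 2^{-(k+1)}) over N,
# and compare with 1 - prod_{k>=1} (1 - 2^{-k}).
total = 0.0
q = 1.0                              # running product prod_{k=0}^{N-1} (1 - 2**-(k+1))
for N in range(200):                 # terms decay like 2**-N, so 200 terms suffice
    total += 2.0 ** (-(N + 1)) * q   # add P(N)
    q *= 1.0 - 2.0 ** (-(N + 1))     # extend the product to include k = N
print(total)                         # ~ 0.711212
print(1.0 - q)                       # ~ 0.711212 as well
```

The agreement is no accident: $P(N) = Q_N - Q_{N+1}$ with $Q_N = \prod_{k=0}^{N-1}(1-2^{-k-1})$, so the partial sums telescope to $1 - Q_M$.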

  2. This is actually a pretty standard chain for recurrent events. The (row-stochastic) transition matrix for this countable-state Markov chain is:

    $P = \begin{bmatrix}
    p & (1-p) & 0 & 0 & 0 & 0 & \dots \\
    p^2 & 0 & (1-p^2) & 0 & 0 & 0 & \dots \\
    p^3 & 0 & 0 & (1-p^3) & 0 & 0 & \dots \\
    p^4 & 0 & 0 & 0 & (1-p^4) & 0 & \dots \\
    p^5 & 0 & 0 & 0 & 0 & (1-p^5) & \dots \\
    \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
    \end{bmatrix}$

    It’s fairly immediate that if we start at the top left (state 1), the only way to avoid ever returning to state 1 (i.e., avoiding a renewal) is to take the above-diagonal transition at every single step.

    There’s a standard analytic fact that $(1-p)(1-p^2)(1-p^3)(1-p^4)\cdots$ converges to a nonzero limit if and only if $p + p^2 + p^3 + p^4 + \cdots$ converges, which of course it does for any $p\in (0,1)$ (the sum is $\frac{p}{1-p}$). The probability of never returning to state 1 is therefore strictly positive, hence the chain is transient.

    Too bad we can’t get $\sum_{k=1}^\infty \log(1-p^k)$ in closed form.
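For what it’s worth, a minimal numerical sketch of the product for $p = 1/2$ (the cutoff of 200 factors is an arbitrary choice; the factors approach 1 geometrically, so convergence is fast):

```python
import math

# Partial product prod_{k=1}^{K} (1 - p**k) and its logarithm, for p = 1/2.
p = 0.5
escape = 1.0                 # probability of never returning to state 1
for k in range(1, 200):
    escape *= 1.0 - p ** k

print(escape)                # ~ 0.288788: strictly positive, so the chain is transient
print(1.0 - escape)          # ~ 0.711212: probability the game is eventually won
print(math.log(escape))      # equals sum_{k>=1} log(1 - p**k)
```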

  3. “It’s clear that $\lim_{t\rightarrow \infty}P(t)=1$, because it becomes progressively less likely we’ll win as the number of Tails flipped accrues.” Could we be more mathematical on this?
