{"id":783,"date":"2016-07-16T13:22:32","date_gmt":"2016-07-16T18:22:32","guid":{"rendered":"http:\/\/www.laurentlessard.com\/bookproofs\/?p=783"},"modified":"2016-07-18T03:43:13","modified_gmt":"2016-07-18T08:43:13","slug":"random-walk-with-endpoints","status":"publish","type":"post","link":"https:\/\/laurentlessard.com\/bookproofs\/random-walk-with-endpoints\/","title":{"rendered":"Random walk with endpoints"},"content":{"rendered":"<p>The <a href=\"http:\/\/fivethirtyeight.com\/features\/how-long-will-you-be-stuck-playing-this-bar-game\/\">Riddler post<\/a> for today is about a bar game where you flip a coin and move forward or backward depending on the result. Here is the problem:<\/p>\n<blockquote><p>\nConsider a hot new bar game. It\u2019s played with a coin, between you and a friend, on a number line stretching from negative infinity to positive infinity. (It\u2019s a very, very long bar.) You are assigned a winning number, the negative integer -X, and your friend is assigned his own winning number, a positive integer, +Y. A marker is placed at zero on the number line. Then the coin is repeatedly flipped. Every time the coin lands heads, the marker is moved one integer in a positive direction. Every time the coin lands tails, the marker moves one integer in a negative direction. You win if the coin reaches -X first, while your friend wins if the coin reaches +Y first. (Winner keeps the coin.)<\/p>\n<p>How long can you expect to sit, flipping a coin, at the bar? 
Put another way, what is the expected number of coin flips in a complete game?\n<\/p><\/blockquote>\n<p>Here is my solution:<br \/>\n<a href=\"javascript:Solution('soln_barwalk','toggle_barwalk')\" id=\"toggle_barwalk\">[Show Solution]<\/a><\/p>\n<div id=\"soln_barwalk\" style=\"display: none\">\n<p>This problem is famously known as <a href=\"https:\/\/en.wikipedia.org\/wiki\/Gambler%27s_ruin\">Gambler&#8217;s ruin<\/a> and it&#8217;s an example of a one-dimensional <a href=\"https:\/\/en.wikipedia.org\/wiki\/Random_walk\">random walk<\/a>. In the typical Gambler&#8217;s ruin setup, you flip a coin repeatedly and either gain \$1 or lose \$1 depending on the outcome. You have a fixed starting budget and a desired goal amount. You keep playing until either you have reached your goal amount, or you run out of money. The bar game we&#8217;re looking at here is precisely Gambler&#8217;s ruin, but shifted so that you start with \$0 and end when you&#8217;re either up \$Y or you&#8217;re \$X in debt.<\/p>\n<p>Here is one possible solution. Let&#8217;s define $E_n$ to be the expected number of coin flips in a complete game, assuming the marker starts at location $n$ on the number line. We would like to find $E_0$. Suppose that at each flip, we move forward with probability $p$ and backward with probability $q$ (where $p+q=1$). If the marker is currently at $n$, then the next flip takes it to $n+1$ with probability $p$, or to $n-1$ with probability $q$. Either way, we incur one additional flip. 
We can therefore write the following recurrence relations for the expected number of coin flips:<\/p>\n<p>\\begin{align}<br \/>\nE_{-X} &#038;= 0 \\\\<br \/>\nE_n &#038;= p E_{n+1} + q E_{n-1} + 1 \\qquad\\text{for }-X < n < Y\\\\\nE_Y &#038;= 0\n\\end{align}\n\nIn matrix form, these equations look like:\n\n\\[\n\\begin{bmatrix}\n1 &#038; -p &#038; 0 &#038; \\dots &#038; 0 \\\\\n-q &#038; 1 &#038; -p &#038; \\ddots &#038; \\vdots \\\\\n0 &#038; -q &#038; 1 &#038; \\ddots &#038; 0 \\\\\n\\vdots &#038; \\ddots &#038; \\ddots &#038; \\ddots &#038; -p \\\\\n0 &#038; \\dots &#038; 0 &#038; -q &#038; 1\n\\end{bmatrix}\n\\begin{bmatrix}\nE_{-X+1} \\\\ E_{-X+2} \\\\ \\vdots \\\\ E_{Y-2} \\\\ E_{Y-1}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n1 \\\\ 1 \\\\ \\vdots \\\\ 1 \\\\ 1\n\\end{bmatrix}\n\\]\n\nThis system of equations has a nice structure; the matrix on the left is both <a href=\"https:\/\/en.wikipedia.org\/wiki\/Tridiagonal_matrix\">tridiagonal<\/a> and <a href=\"https:\/\/en.wikipedia.org\/wiki\/Toeplitz_matrix\">Toeplitz<\/a>. One can solve this system of equations <a href=\"https:\/\/en.wikipedia.org\/wiki\/Tridiagonal_matrix_algorithm\">using a specialized form of Gaussian elimination<\/a>, for example.<\/p>\n<p>Although the equations have a closed-form solution, it is pretty messy. Instead, I&#8217;ll show how you can derive the answer by hand in the case $p=q=\\tfrac{1}{2}$. The last equation can be solved for $E_{Y-1}$ in terms of $E_{Y-2}$:<\/p>\n<p>\\[<br \/>\nE_{Y-1} = \\tfrac{1}{2} E_{Y-2} + 1<br \/>\n\\]<\/p>\n<p>We can then substitute this into the second-to-last equation and solve for $E_{Y-2}$ in terms of $E_{Y-3}$:<\/p>\n<p>\\[<br \/>\nE_{Y-2} = \\tfrac{2}{3} E_{Y-3} + 2<br \/>\n\\]<\/p>\n<p>Substitute into the third-to-last equation and solve for $E_{Y-3}$ in terms of $E_{Y-4}$:<\/p>\n<p>\\[<br \/>\nE_{Y-3} = \\tfrac{3}{4} E_{Y-4} + 3<br \/>\n\\]<\/p>\n<p>The pattern is now clear (you can prove it using induction if you like). 
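As an aside, the linked tridiagonal-elimination method (the Thomas algorithm) makes it easy to sanity-check this system numerically. Here is a rough Python sketch for the fair-coin case $p=q=\tfrac{1}{2}$; the function name is my own, not from the post:

```python
def expected_flips(X, Y):
    """Solve the fair-coin (p = q = 1/2) tridiagonal system for the
    expected game length. Unknowns are E_{-X+1}, ..., E_{Y-1}; the
    boundary values E_{-X} = E_Y = 0 are already folded in.
    Uses the Thomas algorithm: forward sweep, then back substitution."""
    n = X + Y - 1                # number of unknowns
    sub, diag, sup = -0.5, 1.0, -0.5
    c = [0.0] * n                # modified superdiagonal coefficients
    d = [0.0] * n                # modified right-hand side (all 1s initially)
    c[0] = sup / diag
    d[0] = 1.0 / diag
    for i in range(1, n):        # forward sweep: eliminate the subdiagonal
        m = diag - sub * c[i - 1]
        c[i] = sup / m
        d[i] = (1.0 - sub * d[i - 1]) / m
    E = [0.0] * n
    E[-1] = d[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        E[i] = d[i] - c[i] * E[i + 1]
    return E[X - 1]              # E_0 sits at index X-1 in (E_{-X+1}, ..., E_{Y-1})
```

For any $X$ and $Y$ this reproduces the closed-form answer derived below.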
Continuing in this fashion until we reach the starting point, we find<\/p>\n<p>\\[<br \/>\nE_{1} = (1- \\tfrac{1}{Y}) E_{0} + Y - 1<br \/>\n\\]<\/p>\n<p>We can do the same thing starting from the other end, e.g. solving for $E_{-X+1}$ in terms of $E_{-X+2}$, then $E_{-X+2}$ in terms of $E_{-X+3}$, and so on. This time, we obtain:<\/p>\n<p>\\[<br \/>\nE_{-1} = (1- \\tfrac{1}{X}) E_0 + X - 1<br \/>\n\\]<\/p>\n<p>We can now combine the fruits of our labor with the only equation we haven&#8217;t used yet, which relates $E_0$, $E_1$, and $E_{-1}$. The result is:<\/p>\n<p>\\begin{align}<br \/>\nE_0 &#038;= \\tfrac{1}{2} E_{-1} + \\tfrac{1}{2} E_1 + 1 \\\\<br \/>\n&#038;= \\tfrac{1}{2}\\left( (1- \\tfrac{1}{X}) E_0 + X - 1 \\right) + \\tfrac{1}{2} \\left( (1 - \\tfrac{1}{Y}) E_0 + Y - 1 \\right) + 1\\\\<br \/>\n&#038;= \\tfrac{1}{2}\\left( 2- \\tfrac{1}{X} - \\tfrac{1}{Y} \\right) E_0 + \\tfrac{1}{2}(Y + X)<br \/>\n\\end{align}<\/p>\n<p>Rearranging and solving for $E_0$, we obtain:<\/p>\n<p style=\"text-align: center;\"><span style=\"background-color: #AFC8E6; padding: 25px 10px 25px 10px;\">$\\displaystyle<br \/>\nE_0 = \\frac{ Y + X }{ \\tfrac{1}{X} + \\tfrac{1}{Y} } = XY<br \/>\n$<\/span><\/p>\n<p>A similar approach can be used to compute the probability that each player will win. To do this, let $P^Y_n$ be the probability that we&#8217;ll reach $Y$ first given that we are currently at $n$, and similarly define $P^X_n$. Then the recursions look like:<\/p>\n<p>\\begin{align}<br \/>\nP^Y_{-X} &#038;= 0 \\\\<br \/>\nP^Y_n &#038;= p P^Y_{n+1} + q P^Y_{n-1} \\qquad\\text{for }-X < n < Y\\\\\nP^Y_Y &#038;= 1\n\\end{align}\n\nWritten as matrices, the only difference between this problem and that of finding expectation is the right-hand side of the equation! 
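The $E_0 = XY$ result is also easy to sanity-check by simulation. Here is a rough Monte Carlo sketch of the fair-coin game (function and parameter names are my own, not from the post):

```python
import random

def simulate(X, Y, trials=100_000, seed=1):
    """Play the fair-coin bar game `trials` times and return the average
    number of flips per game, which should be close to X*Y."""
    rng = random.Random(seed)    # fixed seed for reproducibility
    total_flips = 0
    for _ in range(trials):
        pos = 0                  # marker starts at the origin
        while -X < pos < Y:      # play until one endpoint is reached
            pos += 1 if rng.random() < 0.5 else -1
            total_flips += 1
    return total_flips / trials
```

For example, `simulate(3, 5)` should land close to $3 \cdot 5 = 15$.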
Using a similar recursive approach, we can solve the equations and we obtain:\n\n\n\n<p style=\"text-align: center;\"><span style=\"background-color: #AFC8E6; padding: 25px 10px 25px 10px;\">$\\displaystyle<br \/>\nP_0^X = \\frac{Y}{X+Y} \\qquad\\text{and}\\qquad P_0^Y = \\frac{X}{X+Y}<br \/>\n$<\/span><\/p>\n<p>So, practically speaking, if we must travel twice as far on the number line as our opponent, we are half as likely to win the game. Just for fun, I made some plots that show how the expected value and the probability of winning change if you&#8217;re not using a fair coin. The plots show the case where $X+Y = 50$. For example, the 40% curve means that the coin flips in your favor 40% of the time.<\/p>\n<p><a href=\"https:\/\/laurentlessard.com\/bookproofs\/wp-content\/uploads\/2016\/07\/bargame_E.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/laurentlessard.com\/bookproofs\/wp-content\/uploads\/2016\/07\/bargame_E.png\" alt=\"bargame_E\" width=\"898\" height=\"412\" class=\"aligncenter size-full wp-image-797\" srcset=\"https:\/\/laurentlessard.com\/bookproofs\/wp-content\/uploads\/2016\/07\/bargame_E.png 898w, https:\/\/laurentlessard.com\/bookproofs\/wp-content\/uploads\/2016\/07\/bargame_E-300x138.png 300w, https:\/\/laurentlessard.com\/bookproofs\/wp-content\/uploads\/2016\/07\/bargame_E-768x352.png 768w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><\/p>\n<p><a href=\"https:\/\/laurentlessard.com\/bookproofs\/wp-content\/uploads\/2016\/07\/bargame_P.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/laurentlessard.com\/bookproofs\/wp-content\/uploads\/2016\/07\/bargame_P.png\" alt=\"bargame_P\" width=\"898\" height=\"412\" class=\"aligncenter size-full wp-image-798\" \/><\/a><\/p>\n<p>If you&#8217;re interested in a derivation of the solution for the case $p\\ne \\tfrac{1}{2}$, there is a well-written set of notes <a 
href=\"http:\/\/web.mit.edu\/neboat\/Public\/6.042\/randomwalks.pdf\">here<\/a>.<\/p>\n<\/div>\n<p>and here is a much slicker solution, courtesy of <a href=\"https:\/\/www.math.wisc.edu\/~ross\/\">Daniel Ross<\/a>:<br \/>\n<a href=\"javascript:Solution('soln_barwalk2','toggle_barwalk2')\" id=\"toggle_barwalk2\">[Show Solution]<\/a><\/p>\n<div id=\"soln_barwalk2\" style=\"display: none\">\n<p>Define the random variables $\\{Z_i\\}$ and $S_k$ as:<\/p>\n<p>\\[<br \/>\nZ_i = \\begin{cases} 1 &#038; \\text{the }i^\\text{th}\\text{ flip is }H \\\\<br \/>\n-1 &#038; \\text{the }i^\\text{th}\\text{ flip is }T<br \/>\n\\end{cases}<br \/>\n\\qquad\\text{and}\\qquad S_k = \\sum_{i = 1}^k Z_i<br \/>\n\\]<\/p>\n<p>Also, let $T$ be the random variable corresponding to the duration of the game. Then $S_T$ is a random variable whose only possible values are $-X$ (with probability $r$) and $Y$ (with probability $1-r$).<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Wald%27s_equation\">Wald&#8217;s equation<\/a> and <a href=\"http:\/\/math.stackexchange.com\/a\/442614\/1356\">Wald&#8217;s second equation<\/a> (sometimes also called the Blackwell-Girshick equation) give expressions for the mean and variance of random-length sums of independent, identically distributed random variables:<\/p>\n<p>\\begin{align}<br \/>\n\\mathbb{E}(T) \\mathbb{E}(Z_1) &#038;= \\mathbb{E}(S_T)\\\\<br \/>\n\\mathbb{E}(T) \\mathrm{Var}(Z_1) &#038;= \\mathbb{E}\\left( (S_T - \\mathbb{E}(Z_1) T )^2 \\right)<br \/>\n\\end{align}<\/p>\n<p>Since $\\mathbb{E}(Z_1) = 0$, the first equation simplifies to $0 = -rX + (1-r)Y$. Solving for $r$, we obtain<\/p>\n<p>\\[<br \/>\nr = \\mathbb{P}(\\text{game ends at } -X) = \\frac{Y}{X+Y}<br \/>\n\\]<\/p>\n<p>Since $\\mathrm{Var}(Z_1) = 1$, the second equation is $\\mathbb{E}(T) = rX^2 + (1-r)Y^2$. 
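The consequences of the two Wald identities can be verified in exact arithmetic with Python's `fractions` module; this is just a quick sketch (the helper name is mine):

```python
from fractions import Fraction

def wald_check(X, Y):
    """Check the two Wald consequences exactly: with r = Y/(X+Y), the
    mean E(S_T) = -r*X + (1-r)*Y vanishes, and the expected duration
    E(T) = r*X^2 + (1-r)*Y^2 collapses to X*Y."""
    r = Fraction(Y, X + Y)              # P(game ends at -X)
    assert -r * X + (1 - r) * Y == 0    # first Wald equation, E(Z_1) = 0
    ET = r * X**2 + (1 - r) * Y**2      # second Wald equation, Var(Z_1) = 1
    return ET
```

For example, `wald_check(3, 5)` returns `Fraction(15, 1)`, i.e. $XY$.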
Substituting the value we found for $r$, we obtain<\/p>\n<p>\\[<br \/>\n\\mathbb{E}(T) = \\frac{Y}{X+Y} X^2 + \\frac{X}{X+Y} Y^2 = XY<br \/>\n\\]<\/p>\n<p>and that&#8217;s it!<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The Riddler post for today is about a bar game where you flip a coin and move forward or backward depending on the result. Here is the problem: Consider a hot new bar game. It\u2019s played with a coin, between you and a friend, on a number line stretching from negative infinity to positive infinity. &hellip; <a href=\"https:\/\/laurentlessard.com\/bookproofs\/random-walk-with-endpoints\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Random walk with endpoints&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[7],"tags":[16,8,21,15,2],"class_list":["post-783","post","type-post","status-publish","format-standard","hentry","category-riddler","tag-dynamic-programming","tag-probability","tag-random-walk","tag-recursion","tag-riddler"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/laurentlessard.com\/bookproofs\/wp-json\/wp\/v2\/posts\/783","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/laurentlessard.com\/bookproofs\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/laurentlessard.com\/bookproofs\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/laurentlessard.com\/bookproofs\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/laurentlessard.com\/bookproofs\/wp-json\/wp\/v2\/comments?post=783"}],"version-history":[{"c
ount":36,"href":"https:\/\/laurentlessard.com\/bookproofs\/wp-json\/wp\/v2\/posts\/783\/revisions"}],"predecessor-version":[{"id":823,"href":"https:\/\/laurentlessard.com\/bookproofs\/wp-json\/wp\/v2\/posts\/783\/revisions\/823"}],"wp:attachment":[{"href":"https:\/\/laurentlessard.com\/bookproofs\/wp-json\/wp\/v2\/media?parent=783"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/laurentlessard.com\/bookproofs\/wp-json\/wp\/v2\/categories?post=783"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/laurentlessard.com\/bookproofs\/wp-json\/wp\/v2\/tags?post=783"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}