One duck
We will solve this problem by starting with a simpler one and building up to the full problem. Let’s consider a single duck on the grid of rocks. At each step, the duck moves to one of the adjacent rocks, chosen uniformly at random. We can describe this process using a Markov chain: each state is a different rock, and the directed edges are labeled with the transition probabilities. Here is a diagram of what the Markov chain looks like:

If we number the states in the same way as above, then the transition matrix for this Markov chain is:
\[
P = \begin{bmatrix}
0 & \tfrac12 & 0 & \tfrac12 & 0 & 0 & 0 & 0 & 0 \\
\tfrac13 & 0 & \tfrac13 & 0 & \tfrac13 & 0 & 0 & 0 & 0 \\
0 & \tfrac12 & 0 & 0 & 0 & \tfrac12 & 0 & 0 & 0 \\
\tfrac13 & 0 & 0 & 0 & \tfrac13 & 0 & \tfrac13 & 0 & 0 \\
0 & \tfrac14 & 0 & \tfrac14 & 0 & \tfrac14 & 0 & \tfrac14 & 0 \\
0 & 0 & \tfrac13 & 0 & \tfrac13 & 0 & 0 & 0 & \tfrac13 \\
0 & 0 & 0 & \tfrac12 & 0 & 0 & 0 & \tfrac12 & 0 \\
0 & 0 & 0 & 0 & \tfrac13 & 0 & \tfrac13 & 0 & \tfrac13 \\
0 & 0 & 0 & 0 & 0 & \tfrac12 & 0 & \tfrac12 & 0 \\
\end{bmatrix}
\]The way to interpret this matrix is that $P_{ij}$ is the probability that we will transition from state $i$ to state $j$. This explains why the rows sum to 1 (the matrix is right-stochastic). We can express this fact neatly using matrix multiplication as $P \mathbf{1} = \mathbf{1}$, where $\mathbf{1}$ is the column vector of all-ones. Mathematically, we can propagate probability distributions through this Markov chain via matrix multiplication. If we have some initial distribution $\mathbf{a}$ across the states, for example
\[
\mathbf{a}^\mathsf{T} = \begin{bmatrix} \tfrac12 & \tfrac12 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}
\]i.e. we are equally likely to be in states 1 or 2, then at the next step the probability distribution will transition $\mathbf{a}\to\mathbf{b}$ with
\[
\mathbf{b}^\mathsf{T} = \mathbf{a}^\mathsf{T}P = \begin{bmatrix} \tfrac16 & \tfrac14 & \tfrac16 & \tfrac14 & \tfrac16 & 0 & 0 & 0 & 0 \end{bmatrix}
\]So in the next step, we might be in states 1, 2, 3, 4, 5 with the associated probabilities above.
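As a quick numerical check of this propagation step, here is what it looks like in Matlab (assuming the matrix above has been entered as P, as in the Computation section below):
a = [1/2 1/2 0 0 0 0 0 0 0]; % initial distribution over the 9 states
b = a*P % distribution after one step: [1/6 1/4 1/6 1/4 1/6 0 0 0 0]
sum(b) % still sums to 1, since P is right-stochastic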
It’s important to distinguish states, e.g. $\{1,2,\dots,9\}$, from distributions over states, which are vectors of probabilities such as the $\mathbf{a}$ and $\mathbf{b}$ used above. When a distribution over states is degenerate, i.e. concentrated entirely on a particular state $s$, I’ll use the following notation to denote the corresponding distribution over states:
\[
\left( \mathbf{e}_s\right)_i = \begin{cases} 1 & \text{if } i=s \\ 0 & \text{otherwise} \end{cases}
\]So, for example, $\mathbf{e}_2^\mathsf{T} = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$.
Stopping time
We are interested in the expected hitting time: the number of steps it takes on average to travel from some initial state $s$ to a target set of states $\mathcal{T}$. To keep this general, I’m assuming there can be multiple target states, collected in the set $\mathcal{T}$. Let’s define $\mathbf{q}_s$ to be the expected hitting time to get from state $s$ to any state in our terminal set $\mathcal{T}$. It turns out that $\mathbf{q}$ satisfies the recursive relationship:
\[
\mathbf{q}_s = \begin{cases}
0 & \text{if }s \in \mathcal{T} \\
1 + \sum_{i} P_{si} \mathbf{q}_i & \text{otherwise}
\end{cases}
\]The first case is clear: if we start in the terminal set, then we have already arrived, so the hitting time is zero. If we are outside the terminal set, then the expected hitting time will be $1$ plus the weighted sum of the hitting times of wherever we end up after the next transition. The recursion above is essentially the Bellman equation from dynamic programming.
Define the vector $\mathbf{t}_i = \begin{cases} 0 & \text{if }i \in \mathcal{T}\\ 1 & \text{otherwise} \end{cases}$.
We can rewrite the equation above in concise vector form as:
\[
\mathbf{q} = \textrm{diag}(\mathbf{t}) \left( \mathbf{1} + P \mathbf{q} \right)
\]Using the fact that $\textrm{diag}(\mathbf{t})\mathbf{1} = \mathbf{t}$, we can further simplify and obtain:
\[
\left( I-\textrm{diag}(\mathbf{t}) P \right) \mathbf{q} = \mathbf{t}
\]For the case of one duck, if we set the terminal set to be $\mathcal{T} = \{5\}$, then we substitute $\mathbf{t} = \begin{bmatrix} 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 \end{bmatrix}^\mathsf{T}$ above and obtain:
\[
\begin{bmatrix}
1 & -\tfrac12 & 0 & -\tfrac12 & 0 & 0 & 0 & 0 & 0 \\
-\tfrac13 & 1 & -\tfrac13 & 0 & -\tfrac13 & 0 & 0 & 0 & 0 \\
0 & -\tfrac12 & 1 & 0 & 0 & -\tfrac12 & 0 & 0 & 0 \\
-\tfrac13 & 0 & 0 & 1 & -\tfrac13 & 0 & -\tfrac13 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & -\tfrac13 & 0 & -\tfrac13 & 1 & 0 & 0 & -\tfrac13 \\
0 & 0 & 0 & -\tfrac12 & 0 & 0 & 1 & -\tfrac12 & 0 \\
0 & 0 & 0 & 0 & -\tfrac13 & 0 & -\tfrac13 & 1 & -\tfrac13 \\
0 & 0 & 0 & 0 & 0 & -\tfrac12 & 0 & -\tfrac12 & 1 \\
\end{bmatrix}
\begin{bmatrix}\mathbf{q}_1\vphantom{\tfrac12} \\ \mathbf{q}_2\vphantom{\tfrac12} \\ \mathbf{q}_3\vphantom{\tfrac12} \\
\mathbf{q}_4\vphantom{\tfrac12} \\ \mathbf{q}_5\vphantom{\tfrac12} \\ \mathbf{q}_6\vphantom{\tfrac12} \\
\mathbf{q}_7\vphantom{\tfrac12} \\ \mathbf{q}_8\vphantom{\tfrac12} \\ \mathbf{q}_9\vphantom{\tfrac12} \end{bmatrix}
= \begin{bmatrix}
1\vphantom{\tfrac12} \\1\vphantom{\tfrac12} \\1\vphantom{\tfrac12} \\1\vphantom{\tfrac12} \\
0\vphantom{\tfrac12} \\1\vphantom{\tfrac12} \\1\vphantom{\tfrac12} \\1\vphantom{\tfrac12} \\1\vphantom{\tfrac12}
\end{bmatrix}
\]Inverting this matrix and solving for $\mathbf{q}$ gives the following result, arranged to match the 3×3 grid:
\[
\begin{array}{|c|c|c|} \hline
\mathbf{q}_1 = 6 & \mathbf{q}_2 = 5 & \mathbf{q}_3 = 6 \\ \hline
\mathbf{q}_4 = 5 & \mathbf{q}_5 = 0 & \mathbf{q}_6 = 5 \\ \hline
\mathbf{q}_7 = 6 & \mathbf{q}_8 = 5 & \mathbf{q}_9 = 6 \\ \hline
\end{array}
So if we start at node 1, it will take us on average 6 moves to reach node 5. It is by no means obvious that the expected hitting times should be integers, since they represent an average path length over an infinite number of possible paths; they just turn out to be integers in this case. The symmetry observed here also makes sense, and we could leverage it to reduce the problem to a Markov chain with only 3 states (corner, edge, center), but I’ll leave that as an exercise for the reader! (The reason I didn’t do this from the start is that I want to generalize easily to the multi-duck case, where the symmetries are more complicated.)
What about the variant where we start at a node and want to know how long it will take us to return to that same node? In this case, we let the first move happen, measure the expected hitting time from wherever we land, and add 1 to account for that first move. If we start at node 5, i.e. our initial distribution is $\mathbf{e}_5^\mathsf{T}$, then our distribution after the first move is $\mathbf{e}_5^\mathsf{T} P$, and so the expected number of moves to start at 5 and return to 5 is $\mathbf{e}_5^\mathsf{T} P \mathbf{q} + 1 = 6$.
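For concreteness, here is a short Matlab sketch of the whole one-duck computation (again assuming the matrix P from the Computation section below; the variable names are my own):
t = ones(9,1); t(5) = 0; % terminal set is {5}
q = (eye(9) - diag(t)*P) \ t; % solve (I - diag(t) P) q = t
% q = [6 5 6 5 0 5 6 5 6]'
e5 = zeros(1,9); e5(5) = 1; % degenerate distribution on state 5
return_time = e5*P*q + 1 % expected return time to node 5: 6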
Two ducks
At first glance, the two-duck version might seem much more challenging than the one-duck version. As we shall see, the problem is essentially the same once we look at it the right way. The key is to imagine a Markov chain where instead of the states being $\{1,2,\dots,9\}$, the states are $\{(1,1), (1,2), \dots, (9,9)\}$. In other words, there are 81 states, consisting of all possible pairs $(s_1,s_2)$, where $s_i \in \{1,2,\dots,9\}$ is the position of duck $i$.
What would be the transition matrix for such a Markov chain? Since the two ducks move independently, the transition probability $(a_1, a_2) \to (b_1, b_2)$ is simply $P_{a_1,b_1} P_{a_2,b_2}$, i.e. the product of the respective transition probabilities for each duck. This means that if we order our states according to the first duck first (lexicographic ordering), then the transition matrix for this augmented Markov chain will be $P\otimes P$, where $\otimes$ is the Kronecker product.
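As a sanity check on this ordering convention (my own illustration): with first-duck-first ordering, the pair $(s_1,s_2)$ gets index $9(s_1-1)+s_2$, and we can spot-check one entry of the Kronecker product in Matlab:
T2 = kron(P,P); % 81-by-81 two-duck transition matrix
a1 = 1; a2 = 2; b1 = 2; b2 = 5; % example transition (1,2) -> (2,5)
T2(9*(a1-1)+a2, 9*(b1-1)+b2) == P(a1,b1)*P(a2,b2) % returns 1 (true)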
So we can solve the problem in a very similar fashion. This time, our terminal set consists of all the nodes $(s_1, s_2)$ where $s_1=s_2$. There are 9 such nodes, and we can form the associated $\mathbf{t}$ by evaluating $\mathbf{t} = \mathbf{1}-\mathrm{vec}(I)$, where $\mathrm{vec}$ stacks the columns of a matrix into a single column vector. For example:
\[
\mathrm{vec} \begin{bmatrix}a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{bmatrix} =
\begin{bmatrix}
a_{11} \\ a_{21} \\ a_{31} \\ a_{12} \\ a_{22} \\ a_{32} \\ a_{13} \\ a_{23} \\ a_{33} \end{bmatrix}
\]The new equation for $\mathbf{q}$ is $\bigl( I-\textrm{diag}(\mathbf{t}) (P\otimes P) \bigr) \mathbf{q} = \mathbf{t}$. The solution to the problem where we start at $(5,5)$ and want to know the expected time it takes to reach the terminal set (where the two ducks are on the same rock) can be computed following the same logic as in the one-duck case: it is $\mathbf{e}_{(5,5)}^\mathsf{T} (P\otimes P) \mathbf{q} + 1$.
A cautionary note
While it may seem like this is straightforward algebra, there is a major issue I neglected to discuss: invertibility. When we solve the equation for $\mathbf{q}$ in the one-duck case, the equation looks like $A\mathbf{q} = \mathbf{t}$, and the solution is $\mathbf{q} = A^{-1}\mathbf{t}$. But what if $A$ isn’t invertible? This corresponds to cases where the Markov chain is not connected: if you start on one island and the terminal set is on another island, you’ll never get there, and the stopping time is infinite. That doesn’t happen with one duck, but it does happen with two ducks! Every time a duck moves, its rock number changes by $\pm 1$ or $\pm 3$, so the parity of the rock flips from odd to even or vice versa. Both ducks flip parity on every move, so if one duck started on rock 1 and the other started on rock 2, they would never meet.
From a linear algebra standpoint, this just means that even though the matrix is not invertible, if we restrict our attention to the “island” (read: subspace) where the two ducks have the same parity, then the associated transition matrix for that island will be invertible and everything will be fine. From a practical standpoint, the equation above can still be solved; we just need to replace the inverse by a pseudoinverse and ignore all the entries of $\mathbf{q}$ corresponding to states where the two ducks’ rocks have different parity. More on this later…
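We can see the singularity numerically with a quick check of my own, reusing the quantities defined in the Computation section below:
T = kron(P,P);
t = 1 - reshape(eye(9),[],1); % same t as in the code below
rank(eye(81) - diag(t)*T) % strictly less than 81, confirming singularity
q = pinv(eye(81) - diag(t)*T)*t; % pseudoinverse solution; only the entries
% of q where both ducks have the same parity are meaningful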
Computation
I computed the result using the following Matlab code:
% transition matrix for one duck
P = [ 0 1/2 0 1/2 0 0 0 0 0
1/3 0 1/3 0 1/3 0 0 0 0
0 1/2 0 0 0 1/2 0 0 0
1/3 0 0 0 1/3 0 1/3 0 0
0 1/4 0 1/4 0 1/4 0 1/4 0
0 0 1/3 0 1/3 0 0 0 1/3
0 0 0 1/2 0 0 0 1/2 0
0 0 0 0 1/3 0 1/3 0 1/3
0 0 0 0 0 1/2 0 1/2 0 ];
T = kron(P,P); % transition matrix
vec = @(X) X(:); % helper: stack the columns of a matrix (vec is not built in)
t = 1-vec(eye(9)); % terminal states
% starting distribution
s = kron([0 0 0 0 1 0 0 0 0],[0 0 0 0 1 0 0 0 0])';
% compute stopping time as a rational number
% (note: this matrix is singular in exact arithmetic, as discussed in the
% cautionary note above; pinv() is a safer alternative to backslash here)
stop_time = s'*T*( (eye(81)-diag(t)*T)\t ) + 1
rats(stop_time)
The resulting expected stopping time was $\frac{363}{74}$, or approximately $4.905$ steps. We can also find an approximate answer via simulation. Here is what you get by simulating one million trials:

The staggered decrease of the probabilities is a real effect; it’s not a result of approximation error!
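For reference, a Monte Carlo simulation along these lines can be sketched as follows (this is a sketch of my own, not the exact code used for the plot; the adjacency list nbhr encodes the 3-by-3 grid of rocks):
% estimate the two-duck rendez-vous time by simulation
nbhr = {[2 4],[1 3 5],[2 6],[1 5 7],[2 4 6 8],[3 5 9],[4 8],[5 7 9],[6 8]};
ntrials = 1e6;
steps = zeros(ntrials,1);
for trial = 1:ntrials
    d1 = 5; d2 = 5; k = 0; % both ducks start on the center rock
    while true
        n1 = nbhr{d1}; d1 = n1(randi(numel(n1))); % each duck hops to a
        n2 = nbhr{d2}; d2 = n2(randi(numel(n2))); % uniformly random neighbor
        k = k + 1;
        if d1 == d2, break; end % the ducks have met
    end
    steps(trial) = k;
end
mean(steps) % approximately 363/74 = 4.9054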
Many ducks
If we have $n$ ducks, we can generalize the approach used above directly. This time, the transition matrix will be $\underbrace{P\otimes \cdots \otimes P}_{n\text{ times}}$. The issue is that this transition matrix is quite large: $9^n \times 9^n$, to be precise. One way to shrink the number of states is to return to the idea of parity. The reason our transition matrix is large is that it describes all transitions between all possible duck configurations. Since we know all ducks start on the same rock and that parity is preserved, we only need to worry about the subset of states for which all ducks have the same parity. Namely, there will be $5^n$ all-odd states and $4^n$ all-even states, so we can restrict ourselves to a smaller $(5^n+4^n)\times(5^n+4^n)$ transition matrix. This is still large, but it’s an improvement. We will also leverage the fact that the transition matrix is sparse, which helps with computation.
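To make this reduction concrete, here is a sketch of how one might build and solve the restricted system (my own code, shown for a small n; it forms the full Kronecker product first and then restricts it, which is simple but wasteful, so for n = 6 you would want to construct the restricted matrix directly):
n = 3; % number of ducks
T = sparse(P);
for k = 2:n, T = kron(T, sparse(P)); end % 9^n x 9^n, first-duck-first order
% decode each state index into the n duck positions (base-9 digits)
N = 9^n;
pos = zeros(N,n);
idx = (0:N-1)';
for k = n:-1:1
    pos(:,k) = mod(idx,9) + 1;
    idx = floor(idx/9);
end
par = mod(pos,2);
keep = find(all(par == par(:,1), 2)); % all ducks share the same parity
Tr = T(keep,keep); % only (5^n + 4^n) states survive
t = double(any(pos(keep,:) ~= pos(keep,1), 2)); % zero on rendez-vous states
m = numel(keep);
q = (speye(m) - spdiags(t,0,m,m)*Tr) \ t;
start = find(all(pos(keep,:) == 5, 2)); % all ducks begin on rock 5
rendezvous_time = Tr(start,:)*q + 1 % approx 18.4360 for n = 3 (see table below)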
After all this work, we end up with the following result (for up to 6 ducks):

The values are as follows:
| Number of ducks | Expected rendez-vous time |
| --- | --- |
| 2 | 4.9054 |
| 3 | 18.4360 |
| 4 | 66.7420 |
| 5 | 237.3955 |
| 6 | 825.3364 |
It’s clear the expected rendez-vous time grows exponentially, but I haven’t found a practical way to approximate it or bound it. Unfortunately, it’s pretty difficult to extend my approach beyond $n=6$ ducks. To give you an idea of scale, our naive transition matrix would have had $9^6=531,441$ rows and columns. After our reduction procedure, we’re down to $5^6+4^6=19,721$. The transition matrix is sparse (about 98.5% of its entries are zeros), but that still leaves about 6 million nonzero entries, which makes computing $\mathbf{q}$ a challenge.
Further reductions are possible if we leverage symmetries. Let’s say we have 2 ducks. If we don’t do any reductions, there are $9^2=81$ states. If we reduce using even/odd parity, we drop to $5^2+4^2 = 41$.
- From a stopping time standpoint, the only thing that matters is how many ducks are on each stone; it doesn’t matter which ducks are on which stones. This immediately allows us to reduce $5^n+4^n$ to ${n+4 \choose 4} + {n+3\choose 3}$, which is a dramatic improvement (tabulated in the snippet after this list). In the two-duck case, this brings us to $25$.
- Deeper symmetries occur as well. For example, a pair of ducks on 1 and 3 is equivalent to a pair of ducks on 7 and 9 by rotational symmetry. In the two-duck case, this brings us to $8$ states. Counting and bookkeeping these states might require some group theory…
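For a sense of scale, the first (occupancy-counting) reduction can be tabulated directly; here is a quick snippet of my own:
% state counts: full product vs. parity reduction vs. occupancy counts
for n = 2:6
    fprintf('%d ducks: %7d -> %6d -> %4d\n', ...
        n, 9^n, 5^n + 4^n, nchoosek(n+4,4) + nchoosek(n+3,3));
end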
Ultimately, these reductions would have a dramatic effect, reducing the scaling from exponential to polynomial. I didn’t implement any of these additional reductions because at this point (up to $n=6$) the exponential trend was clear. I didn’t think that extending the trend further would be particularly enlightening.