CDC’25 in Rio de Janeiro

I attended the 2025 Conference on Decision and Control in Rio de Janeiro. The photo above is a panoramic view from the statue Christ the Redeemer. This year, I had one paper with my student Mohammad and another with my colleague and former postdoc Bryan Van Scoy. Here are short summaries of the works and links to papers and slides.

  • The fastest known globally convergent first-order method for minimizing locally quadratic smooth strongly convex functions by Bryan Van Scoy and me (slides). This paper was also accepted as a dual submission to the IEEE Control Systems Letters.

    A convergence rate is said to be minimax optimal over a function class $S$ if it is both (1) a lower bound: no matter what algorithm one uses, there always exists some $f\in S$ on which it converges at least this slowly, and (2) an upper bound: there exists an algorithm that, no matter which $f\in S$ is chosen, always converges at least this fast. Minimax optimality is a central issue in optimization, and optimal rates have been established for many common classes of functions. For example, it is known that:

    • For a quadratic function whose Hessian’s condition number is at most $\kappa$, the minimax optimal rate is $\rho_{Q} = \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}$ and this rate is achieved by Polyak’s Heavy Ball method (HB).
    • For a smooth strongly convex function with condition number at most $\kappa$, the minimax optimal rate is $\rho_{F} = 1-\frac{1}{\sqrt{\kappa}}$ and this rate is achieved by the Triple Momentum Method (TM).
    In this work, we consider smooth strongly convex functions. We show that under the mild additional condition of twice differentiability at the optimal point, a new, faster algorithm emerges, which we call $C^2$-Momentum (C2M). We prove that C2M is globally convergent and that its worst-case convergence rate is strictly faster than that of TM, with no additional computational cost. The faster rate is roughly $1-\sqrt{\frac{2}{\kappa}}$. We validate our theoretical findings with numerical examples, demonstrating that C2M outperforms TM when the objective function is twice continuously differentiable. Interestingly, Polyak showed that HB is locally convergent for this class at the fast rate of $\rho_{Q}$. Our proposed method has a slightly slower rate, but is globally convergent.
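    The three rates above are easy to compare numerically. Here is a minimal sketch (function names are my own) that evaluates $\rho_Q$ for HB, $\rho_F$ for TM, and the approximate C2M rate $1-\sqrt{2/\kappa}$ for a few condition numbers; smaller means faster:

    ```python
    import math

    # Worst-case convergence rates quoted in the post (smaller = faster),
    # as a function of the condition number kappa.
    def rho_hb(kappa):
        """Heavy Ball on quadratics: (sqrt(k) - 1) / (sqrt(k) + 1)."""
        return (math.sqrt(kappa) - 1) / (math.sqrt(kappa) + 1)

    def rho_tm(kappa):
        """Triple Momentum on smooth strongly convex functions: 1 - 1/sqrt(k)."""
        return 1 - 1 / math.sqrt(kappa)

    def rho_c2m(kappa):
        """C2M, approximate rate from the post: 1 - sqrt(2/k)."""
        return 1 - math.sqrt(2 / kappa)

    for kappa in (10, 100, 1000):
        print(f"kappa={kappa:5d}  HB={rho_hb(kappa):.4f}  "
              f"TM={rho_tm(kappa):.4f}  C2M~{rho_c2m(kappa):.4f}")
    ```

    For every $\kappa > 2$ the ordering is $\rho_Q < 1-\sqrt{2/\kappa} < \rho_F$, matching the summary: C2M is strictly faster than TM but slightly slower than HB's local rate.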

  • State estimation for linear systems with non-Gaussian measurement noise via dynamic programming by Mohammad H. Yoosefian Nooshabadi and me (slides). In this paper, we study the problem of state estimation for linear systems with non-Gaussian measurement noise. It is known that:

    • The Kalman filter (KF) minimizes the mean squared error (MSE) when the noise is Gaussian.
    • KF is the best linear unbiased estimator (BLUE) even when the noise is non-Gaussian.
    • The particle filter minimizes the MSE in the asymptotic limit of infinitely many particles, but it is typically computationally expensive.
    Many alternatives to the particle filter have been proposed for the non-Gaussian case, but they also tend to be computationally expensive. In this paper, we propose a nonlinear filter based on approximately maximizing the a posteriori probability (MAP). Its computational footprint is similar to that of KF, yet its performance compares favorably to KF and to many state-of-the-art nonlinear filters, even when the comparison is made on MSE.
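For readers less familiar with the KF baseline being compared against, here is a minimal scalar Kalman filter sketch (this is the classical KF, not the paper's MAP-based filter; all parameter values are illustrative choices):

```python
import random

def kalman_1d(ys, q=0.01, r=1.0):
    """Scalar Kalman filter: track a state x from measurements y = x + v.

    q: process-noise variance, r: measurement-noise variance.
    """
    x_hat, p = 0.0, 1.0           # initial estimate and its variance
    estimates = []
    for y in ys:
        p += q                    # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x_hat += k * (y - x_hat)  # update toward the measurement residual
        p *= (1 - k)              # posterior variance shrinks
        estimates.append(x_hat)
    return estimates

# Static state (q = 0) observed through Gaussian noise.
random.seed(0)
x_true = 2.0
ys = [x_true + random.gauss(0, 1.0) for _ in range(200)]
est = kalman_1d(ys, q=0.0)
print(f"final estimate: {est[-1]:.3f} (true value {x_true})")
```

When the noise really is Gaussian, as here, this recursion is the MSE-optimal estimator; the paper's contribution is a filter of comparable cost that handles the non-Gaussian case.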
Congratulations to the organizers for putting on a great conference!