
I attended the 2022 Conference on Decision and Control in Cancún, Mexico. The photo above is of the Mayan ruins in Tulum, about a 100-minute drive from Cancún. This was the first time since 2019 that CDC was held in person, and it was nice to finally attend a large conference face-to-face after so many years of pandemic-induced virtual meetings. This year, I had two papers at CDC, both co-authored with my colleague and former postdoc Bryan Van Scoy. Here are short summaries of each talk, with links to the papers and slides.
A universal decomposition for distributed optimization algorithms by Bryan and me (slides). This paper was also accepted as a dual submission to the IEEE Control Systems Letters.
In the distributed optimization problem for a multi-agent system, each agent knows a local function and must find a minimizer of the sum of all agents' local functions by performing local gradient evaluations and communicating with neighboring agents. We prove that every distributed optimization algorithm can be factored into a centralized optimization method and a second-order consensus estimator, effectively separating the "optimization" and "consensus" tasks. We illustrate this fact by providing the decomposition for many recently proposed distributed optimization algorithms. Conversely, we prove that any optimization method that converges in the centralized setting can be combined with any second-order consensus estimator to form a distributed optimization algorithm that converges in the multi-agent setting.
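To make the "optimization plus consensus" template concrete, here is a minimal numerical sketch of one well-known algorithm that fits it: gradient tracking, which pairs a gradient step with a dynamic average-consensus estimator. This is my own illustrative example, not the paper's general construction; the ring network, quadratic local functions, and stepsize are all assumptions chosen for simplicity.

```python
import numpy as np

# Ring network of n agents; agent i holds f_i(x) = 0.5 * (x - a[i])**2,
# so the minimizer of sum_i f_i is the mean of a.
n = 5
rng = np.random.default_rng(0)
a = rng.normal(size=n)

# Doubly stochastic mixing matrix for the ring (uniform neighbor weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i + 1) % n] = W[i, (i - 1) % n] = 1 / 3

grad = lambda x: x - a  # stacked local gradients

# Gradient tracking: x takes gradient steps along y, while y runs a
# consensus recursion that tracks the network-average gradient.
alpha = 0.1
x = np.zeros(n)
y = grad(x)
for _ in range(200):
    x_new = W @ x - alpha * y
    y = W @ y + grad(x_new) - grad(x)
    x = x_new

print("max deviation from global minimizer:", np.max(np.abs(x - a.mean())))
```

With a constant stepsize, every agent's iterate converges to the global minimizer, illustrating how a centralized method (gradient descent) and a consensus estimator combine into a working distributed algorithm.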
This is a nice result because it confirms something we had long suspected: the well-studied consensus and optimization paradigms can be combined to study the distributed optimization problem. Our hope is that this decomposition may lead to a more systematic algorithm design methodology. Although our results show that there is a structural decomposition splitting one problem into instances of the other two, the decomposition does not come with performance guarantees. In other words, combining a fast optimization algorithm with a fast consensus method does not guarantee a performant algorithm for distributed optimization. Indeed, the opposite tends to happen. As we point out in the paper, distributed optimization can be viewed either as robust optimization (where the consensus error acts as a disturbance) or as robust consensus (where the optimization acts as a disturbance). This suggests that robust versions of optimization and consensus algorithms would be the best candidate ingredients for creating useful distributed optimization algorithms.
Absolute stability via lifting and interpolation (slides)
We revisit the classical problem of absolute stability: assessing the robust stability of a given linear time-invariant (LTI) plant in feedback with a nonlinearity belonging to some given function class. Standard results typically take the form of sufficient conditions on the LTI plant, the least conservative of which are based on O'Shea–Zames–Falb multipliers. We present an alternative analysis based on lifting and interpolation that directly constructs a Lyapunov function certifying absolute stability, without resorting to frequency-domain inequalities or integral quadratic constraints. In particular, we use linear matrix inequalities to search over Lyapunov functions that are quadratic in the iterates and linear in the corresponding function values of the system in a lifted space. We show that our approach recovers state-of-the-art results on a set of benchmark problems.
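Schematically, the Lyapunov candidates searched over have the following form (my own paraphrase of the construction; the precise lifted state and interpolation constraints are in the paper). With $\xi_k$ a lifted state stacking several consecutive iterates and $F_k$ the vector of corresponding function values,

$$
V_k = \xi_k^\top P\, \xi_k + p^\top F_k,
\qquad
V_{k+1} - \rho^2 V_k \le 0 \ \text{ along trajectories},
$$

where the decision variables $P \succeq 0$ and $p$ are found by a linear matrix inequality, and the function class enters through interpolation conditions that constrain which sequences of iterates and function values are feasible.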
Our proposed approach does not improve on the state of the art, either in the conservatism of the robustness certificates we can obtain or in the computational complexity required to obtain them. However, it provides a Lyapunov certificate of robust performance that is directly interpretable, which is not possible with the classical approach of integral quadratic constraints (IQCs). Moreover, we show how our lifting and interpolation framework can be flexibly adapted to solve a variety of robust control problems, such as robust exponential stability and robust H2 performance.