
It was an honor and a privilege to be one of eight invited speakers at the biannual NCCR Automation Symposium, hosted in person at ETH Zürich in May 2022.
The theme of this symposium was “Systems Theory of Algorithms”. As explained on the symposium website, many iterative algorithms in optimization, games, and learning can be viewed as dynamical systems evolving over time, with inputs (real-time signals, historic data samples, or random seeds), outputs (decision variables or algorithm iterates), and uncertainties (noise or unknown problem parameters). This dynamical systems perspective is key to future automation, AI, and socio-technical systems, where challenges arise from the analysis and design of algorithms

- interconnected through inputs and outputs (e.g., machine learning pipelines or optimization-based feedback controllers),
- operating online (e.g., running algorithms with streaming data inputs),
- in resource-constrained environments (e.g., real-time embedded systems or big data settings),
- interconnected with humans in the loop (e.g., recommendation systems or autonomous driving),
- featuring self-interested decision makers that need to share a common limited resource (e.g., infrastructure networks),
- robust in the face of severe uncertainty and disturbances (e.g., exogenous inputs, intrinsic randomness, or a non-stationary environment).

I was thrilled to participate in this symposium, as my research in recent years has largely revolved around the analysis and design of optimization algorithms via a dynamical systems viewpoint. I strongly believe that this is the best and most systematic way to approach the difficult problem of selecting and tuning algorithms for a wide variety of uses. In my talk, I gave an overview of three facets of this problem, each of which I approached using tools from robust control theory:
- An overview of the dynamical systems viewpoint. For the most recent summary of this work, take a look at my recent (May 2022!) Control Systems Magazine article on the topic. You may also be interested in our original SIOPT paper that first described the idea.
- A summary of some recent work with Bryan Van Scoy that explores the trade-off between worst-case convergence rate and sensitivity to noise. As with control systems, optimization algorithm convergence speed typically comes at the expense of robustness to noise or other uncertainty. I also wrote an accessible blog post that summarized this work.
- A summary of our work on algorithm analysis and design for distributed optimization. Distributed optimization combines aspects of ordinary gradient-based optimization with consensus over a network. Consequently, the algorithms (and ensuing analysis) are typically more complicated than in either setting separately. This problem greatly benefits from a systems viewpoint because the complicated analysis can be automated.
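To give a flavor of the first point, here is a toy example of my own (not code from the article): gradient descent on a quadratic is literally a linear dynamical system, so its worst-case convergence rate is just the spectral radius of the closed-loop matrix.

```python
import numpy as np

# Toy illustration: gradient descent on f(x) = 0.5 * x^T Q x is the
# linear dynamical system
#   x_{k+1} = x_k - alpha * Q x_k = (I - alpha * Q) x_k,
# so its worst-case rate is the spectral radius of A = I - alpha * Q.

Q = np.diag([1.0, 10.0])              # strongly convex quadratic: m = 1, L = 10
alpha = 2.0 / (1.0 + 10.0)            # classic tuning alpha = 2 / (m + L)
A = np.eye(2) - alpha * Q             # closed-loop state-transition matrix

rho = max(abs(np.linalg.eigvals(A)))  # spectral radius = (L - m)/(L + m) = 9/11
x = np.array([1.0, 1.0])
for _ in range(50):
    x = A @ x                         # iterate the "algorithm" as a dynamical system

# After 50 steps, ||x|| has contracted by roughly rho**50.
```

The same bookkeeping extends to momentum methods (the state just gets bigger), which is what makes the systems viewpoint so convenient to automate.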
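The second point can also be seen in a tiny scalar example (again my own toy illustration, not the analysis from the paper): for noisy gradient descent on a scalar quadratic, tuning for a faster rate directly increases steady-state noise amplification.

```python
# Toy illustration of the speed/robustness trade-off: noisy gradient
# descent on the scalar quadratic f(x) = q/2 * x^2 with additive
# gradient noise w_k of variance s^2 evolves as
#   x_{k+1} = (1 - alpha*q) * x_k - alpha * w_k.
# The rate is rho = |1 - alpha*q|; the stationary variance of x_k
# (its noise sensitivity) is alpha^2 * s^2 / (1 - rho^2).

def rate(alpha, q=1.0):
    return abs(1.0 - alpha * q)

def sensitivity(alpha, q=1.0, s=1.0):
    r = rate(alpha, q)
    return alpha**2 * s**2 / (1.0 - r**2)

slow, fast = 0.1, 0.9   # two step sizes on the same problem

# The faster-converging tuning is also more noise-sensitive:
print(rate(fast) < rate(slow))                  # True
print(sensitivity(fast) > sensitivity(slow))    # True
```

In the paper we make this precise for whole families of algorithms, but the moral is already visible here: you cannot decrease the rate and the sensitivity at the same time by turning a single knob.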
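For the third point, here is a hedged sketch (illustrative only; not one of the algorithms analyzed in our work) of decentralized gradient descent on a ring of four agents, mixing a consensus step with local gradient steps:

```python
import numpy as np

# Sketch of decentralized gradient descent (DGD) on a ring of n = 4
# agents, each holding a private quadratic f_i(x) = 0.5 * (x - b_i)^2.
# Each agent averages with its neighbors (consensus) and takes a local
# gradient step; with a diminishing step size, all agents approach the
# global minimizer mean(b). All numbers here are illustrative.

n = 4
b = np.array([0.0, 1.0, 2.0, 3.0])        # local targets; optimum = mean(b) = 1.5
W = np.array([[0.50, 0.25, 0.00, 0.25],   # doubly stochastic ring weights
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros(n)                           # one scalar iterate per agent
for k in range(5000):
    alpha = 1.0 / (k + 10)                # diminishing step size
    x = W @ x - alpha * (x - b)           # consensus step + local gradient step

# All agents end up in agreement, near the global optimum 1.5.
```

The interleaving of consensus and gradient dynamics is exactly why the analysis gets messy by hand, and why automating it with a systems-theoretic tool pays off.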

On a personal note, this was my first time visiting Switzerland, and I had a wonderful time. Many thanks to Florian Dörfler for the invitation and for organizing what turned out to be a very enjoyable and productive visit. Zürich is a beautiful city, and I hope to visit again soon! I took the photo at the top of the page during one of my walks around town.