10:00–10:20 
Introduction: Optimization algorithms as robust controllers
Laurent Lessard, Northeastern University ★
Abstract:
First-order methods provide robust and efficient solutions to large-scale optimization problems. Recent advances in the analysis and design of first-order methods have been fueled by tools from controls, including integral quadratic constraints and multipliers from robust control. Similar advances have been made in the optimization community through the (related) performance estimation framework. Together, these tools have transformed the way in which we analyze and design optimization methods. This talk will provide an overview of these tools and set the stage for the remainder of the session.

10:20–10:40 
Optimization Algorithm Synthesis Based on Integral Quadratic Constraints: A Tutorial
Carsten W. Scherer, University of Stuttgart ★
Christian Ebenbauer, RWTH Aachen University
Tobias Holicki, University of Stuttgart
Abstract:
We expose in a tutorial fashion the mechanisms which underlie the synthesis of optimization algorithms based on dynamic integral quadratic constraints. We reveal how these tools from robust control make it possible to design accelerated gradient descent algorithms with optimal guaranteed convergence rates by solving small convex semidefinite programs. It is shown that this extends to the design of extremum controllers, with the goal of regulating the output of a general linear closed-loop system to the minimum of an objective function.
Numerical experiments illustrate that we can not only recover gradient descent and the triple momentum variant of Nesterov’s accelerated first-order algorithm, but also automatically synthesize optimal algorithms even if the gradient information is passed through nontrivial dynamics, such as time-delays.
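The SDP-based synthesis itself requires a semidefinite solver, but the gap it closes can be illustrated with a minimal sketch (an assumption of this note, not code from the talk): compare plain gradient descent with a momentum method, using the textbook parameters for smooth strongly convex problems, on an ill-conditioned quadratic.

```python
import numpy as np

# Minimal sketch (not the IQC-based synthesis, which solves SDPs):
# gradient descent vs. a momentum method on f(x) = 0.5 * x^T diag(d) x.
m, L = 1.0, 100.0                      # strong convexity / smoothness
d = np.linspace(m, L, 50)              # eigenvalues of the Hessian
grad = lambda x: d * x                 # gradient of the quadratic

x0 = np.ones_like(d)

# Gradient descent with the standard step size 1/L.
x = x0.copy()
for _ in range(100):
    x = x - grad(x) / L
gd_err = np.linalg.norm(x)

# Nesterov-style momentum with the textbook strongly convex gains
# (assumed here; the synthesis procedure derives such gains via SDPs).
kappa = L / m
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
x, x_prev = x0.copy(), x0.copy()
for _ in range(100):
    y = x + beta * (x - x_prev)
    x_prev, x = x, y - grad(y) / L
acc_err = np.linalg.norm(x)

print(gd_err, acc_err)   # the momentum method converges much faster
```

On this example the accelerated iterate is closer to the minimizer by several orders of magnitude after the same number of gradient evaluations.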

10:40–11:00 
A Tutorial on a Lyapunov-Based Approach to the Analysis of Iterative Optimization Algorithms
Bryan Van Scoy, Miami University
Laurent Lessard, Northeastern University ★
Abstract:
Iterative gradient-based optimization algorithms are widely used to solve difficult or large-scale optimization problems. There are many algorithms to choose from, such as gradient descent and its accelerated variants, including Polyak’s Heavy Ball method and Nesterov’s Fast Gradient method. It has long been observed that iterative algorithms can be viewed as dynamical systems and, more recently, as robust controllers. Here, the “uncertainty” in the dynamics is the gradient of the function being optimized. Therefore, worst-case or average-case performance can be analyzed using tools from robust control theory, such as integral quadratic constraints (IQCs). In this tutorial paper, we show how such an analysis can be carried out using an alternative Lyapunov-based approach. This approach recovers the same performance bounds as with IQCs, but with the added benefit of constructing a Lyapunov function.
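The kind of certificate this approach produces can be checked numerically in a toy setting (a sketch under the assumption of a quadratic objective, where the gradient-descent map is linear and the contraction factor is known in closed form): the candidate Lyapunov function V(x) = ||x - x*||^2 decreases geometrically along the iterates.

```python
import numpy as np

# Toy numeric check of a Lyapunov-style certificate for gradient descent
# on a quadratic f(x) = 0.5 * x^T diag(d) x (minimizer x* = 0).
# With step size alpha, the known contraction factor is
# rho = max(|1 - alpha*m|, |1 - alpha*L|), and V(x) = ||x||^2 satisfies
# V(x_{k+1}) <= rho^2 * V(x_k) along the iterates.
m, L = 1.0, 10.0
alpha = 2.0 / (m + L)                  # step size minimizing rho
rho = max(abs(1 - alpha * m), abs(1 - alpha * L))   # = (L - m)/(L + m)

rng = np.random.default_rng(0)
d = rng.uniform(m, L, size=20)         # Hessian eigenvalues in [m, L]
x = rng.standard_normal(20)

for _ in range(50):
    V_prev = np.dot(x, x)              # V(x_k)
    x = x - alpha * d * x              # gradient step
    assert np.dot(x, x) <= rho**2 * V_prev + 1e-12   # Lyapunov decrease
```

For general smooth strongly convex functions the same inequality is certified by a small semidefinite program rather than checked pointwise.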

11:00–11:20 
A Tutorial on the Structure of Distributed Optimization Algorithms
Bryan Van Scoy, Miami University ★
Laurent Lessard, Northeastern University
Abstract:
We consider the distributed optimization problem for a multi-agent system. Here, multiple agents cooperatively optimize an objective by sharing information through a communication network and performing computations. In this tutorial, we provide an overview of the problem, describe the structure of its algorithms, and use simulations to illustrate some algorithmic properties based on this structure.
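The structure described above, mixing with neighbors plus a local gradient step, can be sketched with decentralized gradient descent (a hypothetical minimal example, not code from the tutorial; with a constant step size this method only reaches a neighborhood of the optimum, and exact convergence requires diminishing steps or gradient tracking).

```python
import numpy as np

# Minimal decentralized gradient descent (DGD) sketch: agent i holds a
# local objective f_i(x) = 0.5 * (x - c_i)^2, averages with neighbors
# through a doubly stochastic weight matrix W, then takes a local
# gradient step. The global minimizer is mean(c).
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])     # doubly stochastic mixing matrix
c = np.array([0.0, 1.0, 5.0])          # local minimizers; global optimum = 2
alpha = 0.05                           # constant step size

x = np.zeros(3)                        # each agent starts at 0
for _ in range(500):
    x = W @ x - alpha * (x - c)        # mix with neighbors + local step

print(x)   # all agents end up near the global optimum 2 (within O(alpha))
```

The residual disagreement between agents is a structural property of this algorithm class, the kind of behavior the tutorial's simulations illustrate.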

11:20–11:40 
Interpolation Constraints for Computing Worst-Case Bounds in Performance Estimation Problems
Anne Rubbens, UCLouvain
Nizar Bousselmi, UCLouvain
Sébastien Colla, UCLouvain
Julien M. Hendrickx, UCLouvain ★
Abstract:
The Performance Estimation Problem (PEP) approach computes worst-case performance bounds on optimization algorithms by solving an optimization problem: one maximizes an error criterion over all allowed initial conditions and all functions in a given class of interest. The maximal value is then a worst-case bound, and the maximizer provides an example reaching that worst case. This approach was introduced for optimization algorithms but could in principle be applied to many other contexts involving worst-case bounds. The key challenge is the representation of the infinite-dimensional objects involved in these optimization problems, such as functions, and of complex or nonconvex objects, such as linear operators and their powers, or networks in decentralized optimization. This challenge can be resolved by interpolation constraints, which represent the effect of these objects on the vectors of interest rather than the objects themselves, leading to tractable finite-dimensional problems. We review several recent interpolation results and their implications for obtaining worst-case bounds via PEP.
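The "maximize the error over a function class" idea can be illustrated in a deliberately restricted toy setting (an assumption of this sketch: one-dimensional quadratics only, whereas true PEP maximizes over all functions in the class via interpolation constraints and a semidefinite program).

```python
import numpy as np

# Toy performance-estimation illustration, restricted to 1-D quadratics
# f(x) = 0.5 * lam * x^2 with curvature lam in [m, L]. Even this subclass
# attains the known worst-case one-step contraction of gradient descent,
# max(|1 - alpha*m|, |1 - alpha*L|).
m, L, alpha = 1.0, 10.0, 0.15

worst = 0.0
for lam in np.linspace(m, L, 1001):
    # one gradient step from x0 = 1; contraction |x1 - x*| / |x0 - x*|
    contraction = abs(1.0 - alpha * lam)
    worst = max(worst, contraction)

bound = max(abs(1 - alpha * m), abs(1 - alpha * L))
print(worst, bound)   # the scan attains the analytic worst case
```

The scan finds the maximizing curvature at an endpoint of [m, L], which is exactly the kind of worst-case instance the maximizer of a PEP returns.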

11:40–12:00 
On Fundamental Proof Structures in First-Order Optimization
Baptiste Goujaud, Ecole Polytechnique ★
Aymeric Dieuleveut, Ecole Polytechnique
Adrien Taylor, Inria/Ecole Normale Supérieure
Abstract:
First-order optimization methods have attracted a lot of attention due to their practical success in many applications, including machine learning. Obtaining convergence guarantees and worst-case performance certificates for first-order methods has become crucial for understanding the ingredients underlying efficient methods and for developing new ones. However, obtaining, verifying, and proving such guarantees is often a tedious task. Therefore, several approaches have been proposed to render this task more systematic, and even partially automated. In addition to helping researchers find convergence proofs, these tools provide insight into the general structure of such proofs. We aim to present those structures, showing how to build convergence guarantees for first-order optimization methods.
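A typical building block of such proofs is an inequality valid for every function in the class, combined with others via nonnegative weights. As a minimal sketch (checked here only on quadratics, an assumption of this example), one such ingredient is the standard co-coercivity inequality for m-strongly convex, L-smooth functions.

```python
import numpy as np

# Convergence proofs for first-order methods are typically weighted
# combinations of class inequalities. One standard ingredient, for
# m-strongly convex and L-smooth f with gradient g:
#   <g(x)-g(y), x-y> >= m*L/(m+L) * ||x-y||^2
#                       + 1/(m+L) * ||g(x)-g(y)||^2
# Checked below on random quadratics f(x) = 0.5 * x^T diag(d) x.
m, L = 1.0, 10.0
rng = np.random.default_rng(1)

for _ in range(100):
    d = rng.uniform(m, L, size=5)      # Hessian eigenvalues in [m, L]
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    gx, gy = d * x, d * y              # gradients of the quadratic
    lhs = np.dot(gx - gy, x - y)
    rhs = (m * L / (m + L)) * np.dot(x - y, x - y) \
        + (1.0 / (m + L)) * np.dot(gx - gy, gx - gy)
    assert lhs >= rhs - 1e-10          # the class inequality holds
```

Finding which such inequalities to combine, and with what weights, is exactly the task that the systematic and automated approaches mentioned above address.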
