I attended the 57th annual Allerton Conference on Communication, Control, and Computing in Monticello, IL. At the conference, I presented an invited talk in a session organized by Bin Hu, my former postdoc who is now at UIUC — so it was nice to see Bin again!

My talk was largely about the following recent paper (arXiv link) by my student Akhil Sundararajan, my postdoc Bryan Van Scoy, and me. The manuscript should be published at some point in 2020 once we make our final revisions. The paper describes how gradient-based distributed optimization algorithms can be parameterized and analyzed (in terms of their worst-case convergence rate) using simple semidefinite programs. In the paper, we study the case where the underlying graph is possibly time-varying (with uniformly bounded spectral gap), and we characterize the worst-case performance under any adversarially-varying graph.

The benefit of this approach is that it enables rapid analysis, which in turn makes efficient algorithm design possible! Our paper also presents an "optimal" algorithm design (optimal in the sense that it has the fastest worst-case convergence rate among the algorithms we parameterize) and a comparison between this algorithm and existing algorithms from the literature. Finally, we show that our bounds are likely tight, because we can construct worst-case sequences of graphs and functions that achieve them.
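To give a flavor of the SDP-based analysis (this is *not* the paper's distributed, time-varying-graph formulation, just the classical single-agent version of the idea): for gradient descent on an $m$-strongly convex, $L$-smooth function, a small linear matrix inequality certifies a worst-case linear convergence rate $\rho$. The sketch below is my own simplification: it normalizes the Lyapunov certificate to a scalar and grid-searches the IQC multiplier $\lambda$ instead of calling an SDP solver, then bisects on $\rho$.

```python
import numpy as np

def certify_rate(m, L, alpha, lam_grid=None, tol=1e-9, iters=50):
    """Smallest rho (by bisection) such that the 2x2 LMI
        [[1 - rho^2 - lam*m*L,   -alpha + lam*(m+L)/2],
         [-alpha + lam*(m+L)/2,   alpha^2 - lam      ]]  <= 0
    holds for some multiplier lam >= 0.  This is the scalar-state
    Lyapunov/IQC condition for x+ = x - alpha*grad(f), with the
    certificate P normalized to 1 and lam found by grid search
    (a real implementation would solve this as an SDP)."""
    if lam_grid is None:
        lam_grid = np.linspace(0.0, 1.0, 20001)
    off = -alpha + lam_grid * (m + L) / 2.0       # off-diagonal entry

    def feasible(rho):
        a = 1.0 - rho**2 - lam_grid * m * L       # top-left entry
        d = alpha**2 - lam_grid                   # bottom-right entry
        det = a * d - off**2
        # 2x2 matrix is negative semidefinite iff both diagonal
        # entries are <= 0 and the determinant is >= 0
        return np.any((a <= tol) & (d <= tol) & (det >= -tol))

    lo, hi = 0.0, 1.0                             # feasibility is monotone in rho
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

m, L = 1.0, 10.0
alpha = 2.0 / (m + L)                             # classical optimal stepsize
rho = certify_rate(m, L, alpha)
print(rho)                                        # close to (L - m)/(L + m) ~ 0.818
```

The certified rate matches the known tight rate $(L-m)/(L+m)$ for this stepsize, which illustrates the tightness phenomenon mentioned above: the SDP bound is achieved by an actual worst-case function. The paper's contribution is doing this kind of analysis for *distributed* algorithms over adversarially time-varying graphs.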