CDC’23 in Singapore

I attended the 2023 Conference on Decision and Control in Singapore. The photo above is a panoramic view from the top (57th floor) of the Marina Bay Sands hotel. This was my first time in Singapore, and probably the farthest I have ever traveled in my life! This year, I had one paper with my former student Mruganka Kashyap, who recently graduated, and I also co-organized a tutorial session with my colleague and former postdoc Bryan Van Scoy entitled “Analysis and Design of Optimization Algorithms Using Tools from Control Theory”. Here are short summaries of the works I was involved in and links to papers and slides.

Continue reading

The Reverse Sweep

If you’ve been following the NBA playoffs, you will know that we are on the verge of a potentially historic event: the first-ever reverse sweep! The Boston Celtics will face the Miami Heat in Boston on Monday, May 29th, in game 7 of the Eastern Conference finals to determine which team will advance to face the Denver Nuggets in the NBA finals. The Heat won the first 3 games of the series, and then the Celtics won the following 3 games. If the Celtics win on Monday night, they will complete a “reverse sweep”. This has never happened in any best-of-seven series in NBA playoff history! But how rare is a reverse sweep, actually?
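For a quick back-of-envelope answer (my own sketch, assuming every game is an independent fair coin flip, which real playoff games certainly are not), we can simply enumerate all possible series outcomes:

```python
from itertools import product

# Naive model: every game is an independent fair coin flip.
# Enumerate all 2^7 = 128 full seven-game sequences
# (1 = team A wins that game, 0 = team B wins). A reverse sweep means
# one team drops the first three games, then wins four straight.
outcomes = list(product([0, 1], repeat=7))
reverse_sweeps = 0
for g in outcomes:
    comeback_a = sum(g[:3]) == 0 and sum(g[3:]) == 4  # A down 0-3, wins out
    comeback_b = sum(g[:3]) == 3 and sum(g[3:]) == 0  # B down 0-3, wins out
    if comeback_a or comeback_b:
        reverse_sweeps += 1

prob = reverse_sweeps / len(outcomes)
print(f"P(reverse sweep) = {reverse_sweeps}/{len(outcomes)} = {prob:.4f}")
```

Under this naive model, only 2 of the 128 equally likely sequences are reverse sweeps, so a given series has about a 1.6% chance of producing one.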

Continue reading

CDC’22 in Cancún

I attended the 2022 Conference on Decision and Control in Cancún, Mexico. The photo above is of the Mayan ruins in Tulum, about a 100-minute drive from Cancún. This was the first time since 2019 that CDC was held in person, and it was nice to finally attend a large conference in person after so many years of pandemic-induced virtual meetings. This year, I had two papers at CDC, both co-authored with my colleague and former postdoc Bryan Van Scoy. Here are short summaries of each talk and links to the papers and slides.

Continue reading

Dissipativity and Algorithm Analysis

My new article, “The Analysis of Optimization Algorithms: A Dissipativity Approach”, appeared in the June 2022 issue of the IEEE Control Systems Magazine. In the article, I explain how the behavior of iterative optimization algorithms can be viewed through the lens of energy dissipation. In mechanical systems, total energy is always conserved, so if some of that energy is dissipated as heat, what remains (kinetic plus potential) must decrease over time, and the system will eventually come to rest. Analogously, optimization algorithms can be shown to have an abstract notion of energy that dissipates over time, driving the system to come to rest (convergent behavior). In this blog post, I explain dissipativity theory and how it pertains to algorithm analysis!
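To make the analogy concrete, here is a minimal numerical sketch of my own (not taken from the article): for gradient descent on a convex quadratic, the suboptimality f(x_k) − f(x★) behaves like a stored energy that dissipates at every iteration.

```python
import numpy as np

# Toy illustration: gradient descent on a convex quadratic
# f(x) = 0.5 x'Qx, whose minimum value is 0 at x = 0.
# The stored "energy" V(x_k) = f(x_k) should dissipate at every step.
Q = np.diag([2.0, 10.0])        # curvatures m = 2 and L = 10
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

alpha = 1.0 / 10.0              # step size 1/L guarantees descent
x = np.array([1.0, 1.0])
energies = [f(x)]
for _ in range(20):
    x = x - alpha * grad(x)
    energies.append(f(x))

print(energies[0], energies[-1])  # energy decreases monotonically toward zero
```

The step size 1/L here is an assumption chosen to guarantee descent on this particular function class; the article covers far more general certificates.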

Continue reading

NCCR Symposium: Systems Theory of Algorithms

It was an honor and a privilege to be one of eight invited speakers at the bi-annual NCCR Automation Symposium. The symposium was hosted in person at ETH Zürich in May 2022.

The theme of this symposium was “Systems Theory of Algorithms”. As explained on the symposium website, many iterative algorithms in optimization, games, and learning can be viewed as dynamical systems evolving over time with inputs (real-time signals, historic data samples, or random seeds), outputs (decision variables or algorithm iterates), and uncertainties (noise or unknown problem parameters). The dynamical systems perspective on such algorithms is key to future automation, AI, and socio-technical systems, where challenges arise in the analysis and design of algorithms.
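As a toy illustration of this perspective (my own sketch, not from the symposium): gradient descent can be written as a discrete-time dynamical system whose input is a feedback signal driven by the gradient and whose output is the current iterate.

```python
# Toy sketch: gradient descent as the feedback interconnection of a
# discrete-time system x_{k+1} = x_k + u_k (an integrator) with the
# "uncertain" gradient map closing the loop: u_k = -alpha * grad(y_k),
# where the output y_k = x_k is the algorithm iterate.
def step(x, u):
    return x + u                    # system dynamics

def feedback(y, alpha, grad):
    return -alpha * grad(y)         # gradient closes the loop

grad = lambda y: 2.0 * (y - 3.0)    # gradient of f(y) = (y - 3)^2
x = 0.0
for _ in range(100):
    x = step(x, feedback(x, 0.1, grad))
print(x)  # the closed loop settles at the minimizer y = 3
```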

Continue reading

Solving Wordle

Wordle is a popular daily word game that has taken the world by storm. The rules are simple: Guess the hidden 5-letter “solution word” using at most 6 guesses. Every time you make a guess, the letter tiles of your guess word are colored to provide a hint. Wordle was recently acquired by the New York Times, and remains free, though now at a new website.
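For readers unfamiliar with the game, the tile-coloring rule can be sketched as follows (my own implementation of the standard rule, including the subtlety with duplicate letters):

```python
from collections import Counter

def score(guess: str, solution: str) -> str:
    """Wordle tile colors: G = green, Y = yellow, - = gray.
    Greens are assigned first; yellows then consume the remaining
    letter counts, so duplicate letters are handled correctly."""
    colors = ["-"] * len(guess)
    remaining = Counter()
    for i, (g, s) in enumerate(zip(guess, solution)):
        if g == s:
            colors[i] = "G"
        else:
            remaining[s] += 1
    for i, g in enumerate(guess):
        if colors[i] != "G" and remaining[g] > 0:
            colors[i] = "Y"
            remaining[g] -= 1
    return "".join(colors)

print(score("crane", "cabal"))  # G-Y--: C correct, one A misplaced
```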

What is the best first guess to use in Wordle? Is it always possible to win in 5 moves or fewer? What about 4 moves? How much harder is Hard Mode? In this article, I will answer all these questions and more!

Continue reading

Visit to Lund University

I had the honor and privilege to be a thesis opponent for the Ph.D. defense of Martin Heyden at Lund University’s Department of Automatic Control in Sweden. The picture on the right is with Anders Rantzer, Martin Heyden, and Richard Pates (Anders and Richard were Martin’s co-advisors). During my visit to Lund, I also gave a seminar on the topic of robust control and optimization algorithms. Lund University is one of the few places in the world to have a department dedicated solely to controls! Most other universities have controls experts spread across various departments (electrical, mechanical, industrial, aerospace, etc.).

Continue reading

Speed-robustness trade-off for optimization algorithms

Excited to share a new manuscript by Bryan Van Scoy and me, titled “The Speed-Robustness Trade-Off for First-Order Methods with Additive Gradient Noise”. It is now available on arXiv. I also have some slides here.

When solving an optimization problem, a ubiquitous approach is to use an iterative algorithm that finds progressively better candidate solutions until a satisfactory solution is obtained. In cases where the problem instances are very large, such as in machine learning applications, gradient-based iterative methods are the most popular. Think of the task of finding the point of highest elevation in a vast landscape. At each step, you measure the slope of the terrain, and move in the direction of steepest increase.

Since we don’t know what the terrain shape will be ahead of time, we want algorithms that work well for a wide variety of terrains. For this reason, a popular way of evaluating algorithms is to look at their worst-case convergence rate on a given class of functions. For example, we might be solving a large linear regression problem, so we know ahead of time that the function we are optimizing is a convex quadratic. Much work has gone into finding algorithms that are “optimal” for a given class of functions. For example, we know that the Heavy Ball method is optimal for convex quadratics and the Triple Momentum Method is optimal for smooth strongly convex functions.
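As a quick numerical illustration of these known results (my own sketch, not from the manuscript), here is the Heavy Ball method with Polyak's tuning outperforming plain gradient descent on a convex quadratic:

```python
import numpy as np

# Compare gradient descent and the Heavy Ball method on a convex
# quadratic f(x) = 0.5 x'Qx whose curvatures lie between m and L.
m, L = 1.0, 100.0
Q = np.diag([m, L])
grad = lambda x: Q @ x
x0 = np.array([1.0, 1.0])

# Gradient descent with its worst-case-optimal fixed step size 2/(m+L)
x = x0.copy()
for _ in range(50):
    x = x - (2.0 / (m + L)) * grad(x)
gd_err = np.linalg.norm(x)

# Heavy Ball with Polyak's tuning (optimal for convex quadratics)
alpha = 4.0 / (np.sqrt(L) + np.sqrt(m)) ** 2
beta = ((np.sqrt(L) - np.sqrt(m)) / (np.sqrt(L) + np.sqrt(m))) ** 2
x_prev, x = x0.copy(), x0.copy()
for _ in range(50):
    x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
hb_err = np.linalg.norm(x)

print(f"gradient descent error: {gd_err:.2e}")
print(f"heavy ball error:       {hb_err:.2e}")
```

After the same number of iterations, Heavy Ball's error is orders of magnitude smaller, since its rate depends on the square root of the condition number rather than the condition number itself.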

Continue reading

New preprint: Robust stabilization!

I’m excited to share our new preprint on robust stabilization, by Saman Cyrus and me. The title is “Generalized necessary and sufficient robust boundedness results for feedback systems”, and it is now available on arXiv. This work is the culmination of Saman’s PhD and formed the core of his thesis, which he recently successfully defended (!).

Robust stabilization is an old problem, dating back at least 70 years to the seminal work of Lur’e, Zames, and Willems. The basic idea is that a known system G is in feedback with an unknown nonlinearity H constrained to lie in some known set, and we seek sufficient conditions on G that ensure stability of the interconnection. In this work, we looked at the case where the nonlinearity is constrained to lie in a cone (also known as “sector-bounded”). Results of this type include the small-gain, circle, passivity, and extended conicity theorems. It may be possible to extend our ideas to dynamic versions of robust stability (multiplier theory, dissipativity, or integral quadratic constraints), but this is an area for future work.
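Concretely, a nonlinearity h lies in the sector [a, b] if (h(x) − a·x)(b·x − h(x)) ≥ 0 for all x. Here is a quick numerical check of my own (a sketch, not code from the paper) showing that the unit saturation lies in the sector [0, 1] but not in [0.5, 1]:

```python
import numpy as np

def in_sector(h, a, b, xs):
    """Numerically check the sector condition
    (h(x) - a*x) * (b*x - h(x)) >= 0 at the sample points xs."""
    hx = h(xs)
    return bool(np.all((hx - a * xs) * (b * xs - hx) >= -1e-12))

sat = lambda x: np.clip(x, -1.0, 1.0)   # unit saturation nonlinearity
xs = np.linspace(-5.0, 5.0, 1001)

print(in_sector(sat, 0.0, 1.0, xs))   # True: saturation lies in [0, 1]
print(in_sector(sat, 0.5, 1.0, xs))   # False: for large |x|, sat(x)/x < 0.5
```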

Continue reading

ACC’20 in Denver (online)

I attended the 2020 American Control Conference, which was supposed to take place in Denver, Colorado, but was moved to a fully online format as a precaution against COVID-19. I normally include a photo of the conference venue with these blog posts, but I don’t think anybody would be interested in seeing my home office setup, so I’ll go with no photo this time!

At ACC, I presented a paper with Peter Seiler on the topic of using IQCs for algorithm synthesis. Iterative algorithms like gradient descent can be viewed as robust controllers. In robust control, we have an uncertain dynamical system we would like to control, and we assume the uncertainty is bounded. For example, we could have a mechanical or electrical system consisting of components that are subject to manufacturing tolerances and so we can’t know their exact parameters. Or perhaps their parameters change slowly over time as wear and tear sets in. The goal is to design a feedback controller that works for all systems in this bounded set. This robustness is desirable because if we build 100 identical robots, they won’t actually be identical. Using a robust controller means that we can design a single controller to use in all robots, and it will work even though the robots are not exactly identical.

Optimization algorithms are very similar: we want to design an iterative optimization algorithm that works on a variety of different objective functions. The optimization algorithm plays the role of the controller and the objective function plays the role of the uncertain plant. In this paper, we showed how IQC synthesis tools from robust control theory can be used to design robust iterative algorithms.
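Here is a toy illustration of this “one controller for many plants” idea (my own sketch, not from the paper): a single fixed step size makes gradient descent converge for every quadratic whose curvature lies in a known interval [m, L].

```python
import numpy as np

# One "controller" (a fixed step size) stabilizing a whole family of
# "plants": quadratics f(x) = 0.5 * lam * x^2 with curvature lam in [m, L].
m, L = 1.0, 10.0
alpha = 2.0 / (m + L)              # worst-case-optimal step over the class

final_errors = []
for lam in np.linspace(m, L, 5):   # sample the family of uncertain plants
    x = 1.0
    for _ in range(200):
        x = x - alpha * lam * x    # gradient step on f(x) = 0.5*lam*x^2
    final_errors.append(abs(x))

print(max(final_errors))           # every member of the family converges
```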

This year, ACC experimented with a new type of presentation. Rather than the standard 20-minute talk, presenters were given the option to do 3-minute “rapid interactive” talks. I opted for this new format, and since all talks had to be pre-recorded, I’m happy to share with you my 3-minute talk explaining our paper! Here is a link to the video. My short talk was also awarded one of the inaugural “Rapid Interactive People’s Choice Awards”!

Overall, I thought the conference was very well run, and I commend the organizers and staff for getting everything together under such difficult circumstances. I definitely miss the ability to meet with people face-to-face and the spontaneous encounters and discussions that happen at large conferences, but there are also advantages to an online format. There are no logistical or scheduling issues arising from two talks happening at the same time. And the ability to pause a talk or watch it again later is very nice!