ESE Fall Seminar – “Acceleration by Stepsize Hedging”
October 24, 2023 at 11:00 AM - 12:00 PM
Can we accelerate the convergence of gradient descent without changing the algorithm, just by optimizing its stepsizes? Surprisingly, we show that the answer is yes. Our proposed Silver Stepsize Schedule optimizes strongly convex functions in $k^{\log_\rho 2} \approx k^{0.7864}$ iterations, where $\rho = 1+\sqrt{2}$ is the silver ratio and $k$ is the condition number. This is intermediate between the textbook unaccelerated rate $k$ and the accelerated rate $\sqrt{k}$ due to Nesterov in 1983. The non-strongly convex setting is conceptually identical, and there we obtain an analogous accelerated rate of $\varepsilon^{-\log_\rho 2} \approx \varepsilon^{-0.7864}$. We conjecture, and provide partial evidence, that these rates are optimal among all possible stepsize schedules.
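For concreteness, the exponent appearing in both rates is determined entirely by the silver ratio:
$$\log_\rho 2 = \frac{\ln 2}{\ln(1+\sqrt{2})} \approx \frac{0.6931}{0.8814} \approx 0.7864.$$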
The Silver Stepsize Schedule is an explicit non-monotonic fractal. Why should such stepsizes help? The core intuition is “hedging” between individually suboptimal strategies — short steps and long steps — since bad cases for the former are good cases for the latter, and vice versa. Properly combining these stepsizes yields faster convergence due to the misalignment of worst-case functions. This talk is based on joint work with Pablo Parrilo that publishes and extends my 2018 Master’s Thesis — which established for the first time that judiciously chosen stepsizes can enable accelerated convex optimization. Prior to this thesis, the only such result was for the special case of quadratics, due to Young in 1953.
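As a concrete illustration, the sketch below runs plain gradient descent driven by a doubling, fractal stepsize schedule in the spirit described above. The recursive construction used here (two copies of the previous schedule with one long step $1+\rho^{k-1}$ inserted between them, base step $\sqrt{2}$) reflects one reading of the accompanying paper and is a hedged sketch rather than a definitive implementation; the quadratic test problem and all function names are illustrative and not part of the talk.

```python
import numpy as np

# Silver ratio
RHO = 1 + np.sqrt(2)

def silver_schedule(k):
    """Fractal stepsize schedule of length 2**k - 1.

    Assumed recursive construction (to be checked against the paper):
    the length-(2^{j+1} - 1) schedule is two copies of the length-(2^j - 1)
    schedule with a single long step 1 + RHO**(j-1) inserted in the middle;
    the base schedule is the single step sqrt(2).
    """
    schedule = [np.sqrt(2)]
    for j in range(1, k):
        schedule = schedule + [1 + RHO ** (j - 1)] + schedule
    return np.array(schedule)

def gradient_descent(grad, x0, stepsizes, L=1.0):
    """Vanilla gradient descent; only the (1/L-normalized) stepsizes vary."""
    x = np.array(x0, dtype=float)
    for h in stepsizes:
        x = x - (h / L) * grad(x)
    return x

if __name__ == "__main__":
    # Illustrative smooth convex test problem: f(x) = 0.5 * x^T A x
    A = np.diag([1.0, 0.1, 0.01])        # condition number k = 100
    grad = lambda x: A @ x
    x_out = gradient_descent(grad, np.ones(3), silver_schedule(5), L=1.0)
    print("final gradient norm:", np.linalg.norm(grad(x_out)))
```

Note that the schedule is non-monotonic: occasional long steps, which would be unstable on their own, are hedged by the surrounding short steps, matching the intuition described above.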
Jason Altschuler
Assistant Professor of Statistics and Data Science, UPenn Wharton
Jason Altschuler is an Assistant Professor at UPenn in the Department of Statistics and Data Science, with a secondary appointment in the Department of Computer Science. He earned his undergraduate degree from Princeton and his PhD from MIT, where he received the Sprowls Award for the best MIT thesis in AI & Decision Making. His research is at the interface of optimization, probability, and machine learning, with a focus on the design and analysis of large-scale optimization algorithms.