Mathematics Colloquia and Seminars

Sharp global convergence guarantees for iterative nonconvex optimization with random data

Probability

Speaker: Kabir Chandrasekher, Stanford
Location: MSB 2112
Start time: Tue, Oct 25 2022, 1:10PM

Iterative algorithms are the workhorses of modern statistical learning and are widely used to fit large-scale, complex models to random data. While the choice of an algorithm and its hyperparameters determines both the speed and fidelity of the learning pipeline, it is common for this choice to be made heuristically, either by expensive trial-and-error or by comparing rough bounds on the convergence rates of various candidate algorithms. Motivated by this, we develop a principled framework that produces sharp, iterate-by-iterate characterizations of solution quality for algorithms run with sample-splitting on a wide range of nonconvex model-fitting problems with random data. I will present the general framework and use it to showcase several concrete consequences for parameter estimation in some popular statistical models, covering both higher-order algorithms based on alternating updates and first-order algorithms based on subgradient descent.
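For readers who want a concrete instance of the kind of first-order method with sample-splitting that the abstract references, the sketch below runs subgradient descent on real-valued phase retrieval, drawing a fresh batch of samples at each iteration. The model, warm start, and geometrically decaying step size are all illustrative assumptions, not details taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem instance: real phase retrieval,
# y_i = |<x_i, theta*>| with standard Gaussian covariates x_i.
d, T, n_per_iter = 50, 25, 400          # dimension, iterations, samples per split
theta_star = rng.standard_normal(d)
theta_star /= np.linalg.norm(theta_star)

def fresh_batch(n):
    """Draw a fresh sample split: Gaussian covariates and phaseless responses."""
    X = rng.standard_normal((n, d))
    y = np.abs(X @ theta_star)
    return X, y

# Subgradient descent on the robust loss f(theta) = mean_i | |x_i^T theta| - y_i |,
# using a fresh batch at every iteration (sample-splitting) and a geometrically
# decaying step size, a common choice for this nonsmooth loss.
theta = theta_star + 0.4 * rng.standard_normal(d)   # assumed warm start near the truth
step = 0.5
for t in range(T):
    X, y = fresh_batch(n_per_iter)
    r = X @ theta
    # Subgradient of the absolute-value loss; sign(0) = 0 is a valid subgradient choice.
    g = (X * (np.sign(np.abs(r) - y) * np.sign(r))[:, None]).mean(axis=0)
    theta = theta - step * g
    step *= 0.9
    # Error up to the global sign ambiguity inherent to phase retrieval.
    err = min(np.linalg.norm(theta - theta_star), np.linalg.norm(theta + theta_star))
    if t % 5 == 0:
        print(f"iter {t:2d}: error {err:.4f}")
```

Drawing a fresh batch at each iteration is what "sample-splitting" refers to here: the current iterate is statistically independent of the data used to update it, which is what makes sharp, iterate-by-iterate characterizations of the error tractable.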