Mathematics Colloquia and Seminars


Escaping Saddle-Points Faster under Interpolation-like Conditions

Mathematics of Data & Decisions

Speaker: Abhishek Roy, UC Davis (Statistics)
Location: Zoom Lecture
Start time: Tue, Oct 6 2020, 4:10PM

Abstract: In this work, we show that under over-parametrization several standard stochastic optimization algorithms escape saddle-points and converge to local-minimizers much faster. One of the fundamental aspects of over-parametrized models is that they are capable of perfectly interpolating the training data. We show that, under interpolation-like assumptions satisfied by the stochastic gradients under over-parametrization, the first-order oracle complexity of the Perturbed Stochastic Gradient Descent (PSGD) algorithm to reach an $\epsilon$-local-minimizer matches the corresponding deterministic rate of $\epsilon^{-2}$. We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions. The first- and second-order oracle complexities of SCRN to reach an $\epsilon$-local-minimizer turn out to be $\epsilon^{-2.5}$. The obtained complexity is better than the corresponding complexity of either PSGD or SCRN without interpolation-like assumptions. However, it does not match the rate of $\epsilon^{-1.5}$ corresponding to the deterministic Cubic-Regularized Newton method. We also discuss the corresponding improved complexities in the zeroth-order settings.
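
For context, PSGD is stochastic gradient descent augmented with occasional random perturbations that help the iterates leave saddle points. Below is a minimal, generic Python sketch of such a loop on a toy objective with a strict saddle; the hyperparameters, perturbation rule, and example function are illustrative assumptions, not the specific algorithm, schedule, or settings analyzed in the talk.

import numpy as np

def perturbed_sgd(stoch_grad, x0, steps=20_000, lr=1e-2,
                  grad_tol=1e-3, noise_radius=1e-2, seed=0):
    """Generic perturbed-SGD sketch: take stochastic gradient steps and,
    whenever the observed gradient is small (a candidate saddle point),
    add a small uniform perturbation to help escape it.  The actual PSGD
    algorithm spaces out perturbations and uses carefully tuned radii;
    the values here are placeholders for illustration only."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = stoch_grad(x)                          # stochastic first-order oracle
        if np.linalg.norm(g) <= grad_tol:          # near a stationary point
            x += rng.uniform(-noise_radius, noise_radius, size=x.shape)
        else:
            x -= lr * g                            # plain SGD step
    return x

# Toy example (hypothetical): f(x, y) = (x^2 - 1)^2 + y^2 has a strict
# saddle at the origin and local minimizers at (+-1, 0); the oracle
# returns the gradient plus small Gaussian noise.
def noisy_grad(z, rng=np.random.default_rng(1)):
    g = np.array([4.0 * z[0] * (z[0] ** 2 - 1.0), 2.0 * z[1]])
    return g + 1e-4 * rng.standard_normal(2)

print(perturbed_sgd(noisy_grad, x0=np.zeros(2)))   # ends up near (+-1, 0)

Started exactly at the saddle, the small gradient triggers a perturbation, after which ordinary stochastic gradient steps carry the iterate to one of the two local minimizers.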



Zoom info is available at https://sites.google.com/view/maddd. After the talk, we will have a virtual tea/coffee get-together at https://gather.town/KOoFj0aKT5GkEj40/Alder-Room