Mathematics Colloquia and Seminars


A Mean Field View of the Landscape of Two-Layers Neural Networks

Mathematics of Data & Decisions

Speaker: Andrea Montanari, Stanford University
Related Webpage: http://maddd.math.ucdavis.edu
Location: 1147 MSB
Start time: Mon, Oct 1 2018, 3:00PM

Multi-layer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding. Learning a neural network requires optimizing a non-convex, high-dimensional objective (the risk function), a problem usually attacked with stochastic gradient descent (SGD). Does SGD converge to a global optimum of the risk, or only to a local optimum? In the first case, does this happen because local minima are absent, or because SGD somehow avoids them? In the second case, why do the local minima reached by SGD have good generalization properties?
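
To make the setup concrete, the following is a minimal numpy sketch of the kind of problem the abstract refers to: a two-layer network with the mean-field 1/N output scaling, trained by online SGD on a squared loss against a planted teacher. Every modelling choice below (data distribution, tanh activation, sizes, step size) is an illustrative assumption, not something specified in the talk.

    import numpy as np

    # Two-layer network y_hat(x) = (1/N) * sum_i a_i * tanh(w_i . x),
    # trained by online SGD against a planted teacher network.
    rng = np.random.default_rng(0)
    d, N = 5, 100                                  # input dimension, hidden units

    w = rng.normal(size=(N, d)) / np.sqrt(d)       # student first-layer weights
    a = rng.normal(size=N)                         # student second-layer weights
    w_star = rng.normal(size=(N, d)) / np.sqrt(d)  # teacher weights
    a_star = rng.normal(size=N)

    def net(a_, w_, x):
        # Mean-field normalisation: average, not sum, over hidden units.
        return np.mean(a_ * np.tanh(w_ @ x))

    eps, losses = 0.5, []
    for k in range(50_000):
        x = rng.normal(size=d)                     # one fresh sample per SGD step
        y = net(a_star, w_star, x)
        h = np.tanh(w @ x)
        err = np.mean(a * h) - y                   # y_hat - y
        losses.append(err ** 2)
        # Gradient of 0.5 * (y_hat - y)^2 with respect to a_i and w_i.
        a -= eps * err * h / N
        w -= eps * err * (a * (1.0 - h ** 2))[:, None] * x[None, :] / N

    print("mean squared error over the last 1000 steps:", np.mean(losses[-1000:]))
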
We consider a simple case, namely two-layer neural networks, and prove that, in a suitable scaling limit, the SGD dynamics is captured by a certain non-linear partial differential equation (PDE) that we call the distributional dynamics (DD). We then consider several specific examples and show how the DD can be used to prove convergence of SGD to networks with nearly ideal generalization error. This description allows us to 'average out' some of the complexities of the landscape of neural networks, and can be used to prove a general convergence result for noisy SGD.
[Based on joint work with Song Mei and Phan-Minh Nguyen]
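
For readers who want the flavor of the result, the display below sketches the standard mean-field formulation that the abstract alludes to. The notation (the activation \sigma_*, the parameter distribution \rho_t, the potentials V, U, \Psi) is assumed here for illustration and is not taken from the announcement itself.

    % Two-layer network with N hidden units and mean-field (1/N) output scaling:
    %   \hat y(x;\theta) = \frac{1}{N} \sum_{i=1}^N \sigma_*(x;\theta_i).
    % As N -> \infty and the SGD step size -> 0 (time t measured as steps times step size),
    % the empirical distribution \rho_t of the parameters \theta_i evolves, up to a
    % deterministic time rescaling, by the distributional dynamics
    \[
      \partial_t \rho_t
        = \nabla_\theta \cdot \bigl( \rho_t \, \nabla_\theta \Psi(\theta;\rho_t) \bigr),
      \qquad
      \Psi(\theta;\rho) = V(\theta) + \int U(\theta,\theta')\,\rho(\mathrm{d}\theta'),
    \]
    % with V(\theta) = -\mathbb{E}[\, y\,\sigma_*(x;\theta) \,] and
    % U(\theta,\theta') = \mathbb{E}[\, \sigma_*(x;\theta)\,\sigma_*(x;\theta') \,],
    % so that the population squared-error risk becomes a functional of \rho alone:
    \[
      R(\rho) = \mathbb{E}[y^2] + 2\int V(\theta)\,\rho(\mathrm{d}\theta)
                + \iint U(\theta,\theta')\,\rho(\mathrm{d}\theta)\,\rho(\mathrm{d}\theta').
    \]
    % Noisy SGD adds a diffusion (Laplacian) term to the same PDE.
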