Random Matrix Theory and Modern Machine Learning
Probability

| Speaker: | Michael W. Mahoney, UC Berkeley |
| Location: | 2112 MSB |
| Start time: | Tue, Jan 6 2026, 1:10PM |
Random matrix theory is a large area with a long history, an elegant theory, and a wide range of applications. However, the challenges of modern machine learning are forcing us not only to use random matrix theory in new ways but also to chart out new directions for the theory itself. In this series of presentations, we'll cover several aspects of these developments. These include challenges in training machine learning models, inference in overparameterized models, diagnostics where heavy-tailed distributions are ubiquitous, and computational-statistical tradeoffs in randomized numerical linear algebra. Addressing these challenges leads to new directions for theory: phenomenology and semi-empirical theory to characterize performance in state-of-the-art neural networks without access to training or testing data; high-dimensional linearizations and deterministic equivalents to go beyond eigenvalue distributions of linear models; very sparse embeddings that perform "algorithmic Gaussianization" to speed up core numerical linear algebra problems; new random matrix models that have heavy-tailed spectral structure without having heavy-tailed elements; and using "free compression" ideas in reverse to compute high-quality spectral distributions of so-called impalpable matrices (matrices that we cannot form explicitly, or even access through full matrix-vector products).
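As a point of reference for the deterministic-equivalent theme, the following minimal sketch shows the idea in its most classical form: the empirical eigenvalue distribution of a sample covariance matrix built from iid data concentrates around the deterministic Marchenko-Pastur density. This is only the baseline the talk aims to go beyond, and the dimensions `n` and `d` below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4000, 1000          # samples and dimension (illustrative); q = d/n <= 1
q = d / n

# Eigenvalues of the sample covariance of iid standard Gaussian data.
X = rng.standard_normal((n, d))
eigs = np.linalg.eigvalsh(X.T @ X / n)

# Deterministic Marchenko-Pastur support edges and density (unit variance).
lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

def mp_density(x):
    return np.sqrt(np.maximum((lam_plus - x) * (x - lam_minus), 0.0)) / (2 * np.pi * q * x)

# Compare the empirical histogram with the deterministic density at a few points.
hist, edges = np.histogram(eigs, bins=100, range=(lam_minus, lam_plus), density=True)
centers = (edges[:-1] + edges[1:]) / 2
for x in np.linspace(lam_minus + 0.05, lam_plus - 0.05, 5):
    emp = hist[np.argmin(np.abs(centers - x))]
    print(f"x = {x:5.2f}   empirical {emp:5.3f}   Marchenko-Pastur {mp_density(x):5.3f}")
```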
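Similarly, as a generic instance of the sparse-embedding idea mentioned above (a sketch under illustrative assumptions, not the specific construction from the talk), a sparse sign embedding with a handful of nonzeros per column can replace a tall least-squares problem with a much smaller sketched one. The sketch dimension `m` and per-column sparsity `s` below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20000, 50           # tall overdetermined system: n >> d
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

m = 500                    # rows in the embedding, m << n (illustrative)
s = 8                      # nonzeros per column of the embedding (illustrative)

# Build a sparse sign embedding S (m x n): each column has s random
# +/- 1/sqrt(s) entries in distinct rows. Loosely, applying S makes the
# small sketched problem behave like a dense Gaussian projection.
rows = np.concatenate([rng.choice(m, size=s, replace=False) for _ in range(n)])
cols = np.repeat(np.arange(n), s)
vals = rng.choice([-1.0, 1.0], size=n * s) / np.sqrt(s)

# Accumulate SA and Sb without ever materializing S as a dense matrix.
SA = np.zeros((m, d))
Sb = np.zeros(m)
np.add.at(SA, rows, vals[:, None] * A[cols])
np.add.at(Sb, rows, vals * b[cols])

# Solve the small sketched problem instead of the full one and compare.
x_sketch, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print("relative error:", np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```

In practice, such sketch-and-solve answers are coarse, and sketches of this kind are often used instead to build preconditioners for iterative solvers on the full problem.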
