On Geometry of Regularized M-estimation and Structured Model Selection
Mathematics of Data & Decisions Seminar

Speaker: Dogyoon Song, UC Davis
Location: 1025 PDSB
Start time: Tue, Feb 25 2025, 3:10PM
Regularized M-estimation, the estimation of model parameters by minimizing a composite objective consisting of a loss plus a regularizing penalty, is a widely used framework for learning under high-dimensional or ill-posed conditions. A prime example is the Lasso, which pairs a quadratic loss with an l1-norm penalty to learn high-dimensional linear models under sparsity assumptions. While the Lasso is underpinned by elegant theory and its principles have been extended to other "sparse high-dimensional" settings, conceptual clarity about the underlying mechanisms often fades beyond the Lasso. Existing theories for such extensions typically rely on ad hoc techniques, such as controlling "higher-order deviation terms" derived from Lasso proofs, which yield limited insight or suboptimal guarantees. Moreover, to our knowledge, no overarching framework extends cleanly beyond a few specific settings.

Meanwhile, recent advances in machine learning have driven a shift in perspective: rather than positing a structure first and then crafting a tailored regularizer, practitioners increasingly choose a loss and regularizer upfront and then exploit the structure they induce.

In this talk, we present preliminary results from our ongoing research (joint with Venkat Chandrasekaran) exploring the geometry of regularized M-estimators beyond the Lasso paradigm, focusing on model selection consistency for a broad class of convex losses and norm regularizers. We also propose a systematic approach for characterizing the structures induced by norm-based regularizers.
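For concreteness, the composite objective underlying this framework can be sketched as follows; the notation is ours, added for illustration, and is not drawn from the talk itself:

\[
\hat{\beta} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \; \mathcal{L}(\beta) + \lambda\,\Omega(\beta),
\qquad \text{e.g., Lasso: } \mathcal{L}(\beta) = \tfrac{1}{2n}\lVert y - X\beta \rVert_2^2, \quad \Omega(\beta) = \lVert \beta \rVert_1,
\]

where \(\mathcal{L}\) is a convex loss, \(\Omega\) is a norm regularizer, and \(\lambda > 0\) trades off data fit against the structure the regularizer promotes.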