Luis Rademacher: Research overview

Curriculum vitae

A more detailed research statement

Introduction

This page gives an overview of my research over the last few years, with links to representative publications and brief commentary on each.

My research interests are in theoretical computer science and related areas, ranging from pure mathematics to applications. I am primarily interested in the foundations of data science and artificial intelligence, which has led to a focus on problems in convex geometry, machine learning, matrix computations, and optimization. The papers below illustrate my work on asymptotic geometric analysis, random polytopes, and the slicing problem [1,2], expander graphs and graph sparsification [3], low-rank matrix approximation [4], and efficient high-dimensional statistical inference [5,6].

Selected Papers

  1. On the monotonicity of the expected volume of a random simplex.
    Mathematika, 2012.

    Motivated by the slicing problem in asymptotic geometric analysis, this paper solved the following (formerly open) problem: suppose that a \(d\)-dimensional convex body \(K\) is contained in another convex body \(L\), and let a random simplex of a body be the convex hull of \(d+1\) independent uniformly random points from it. Is the expected volume of a random simplex of \(K\) at most the expected volume of a random simplex of \(L\)? The paper shows that the answer depends on the dimension: no for \(d \geq 4\) and yes for \(d \leq 2\) (and it gives evidence that the answer is no for \(d = 3\)).
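
    As a quick illustration of the quantity involved, the following sketch estimates the expected volume of a random simplex by Monte Carlo. The choice of domain (the unit ball) and all names and parameters are illustrative only and not tied to the paper.

```python
# Monte Carlo estimate of the expected volume of a random simplex:
# the convex hull of d+1 i.i.d. uniform points from a body K.
# Illustrative sketch only; here K is the d-dimensional unit ball.
import numpy as np
from math import factorial

def uniform_ball(n, d, rng):
    """n i.i.d. uniformly random points in the d-dimensional unit ball."""
    g = rng.standard_normal((n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)  # uniform on the sphere
    r = rng.random(n) ** (1.0 / d)                 # radial correction
    return g * r[:, None]

def expected_simplex_volume(d, trials=100_000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        p = uniform_ball(d + 1, d, rng)
        # vol(simplex) = |det(p_1 - p_0, ..., p_d - p_0)| / d!
        total += abs(np.linalg.det(p[1:] - p[0])) / factorial(d)
    return total / trials

print(expected_simplex_volume(d=3))
```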

  2. A simplicial polytope that maximizes the isotropic constant must be a simplex.
    Mathematika, 2015.

    This paper is also motivated by the slicing problem mentioned above. One formulation of the problem is the following: for a \(d\)-dimensional convex body \(K\), define the isotropic constant \(L_K\) by \( L_K^{2d} = \det(A(K))/(\mathrm{vol}(K))^2\), where \(A(K)\) is the covariance matrix of the uniform distribution on \(K\). The slicing problem asks whether \(L_K\) is bounded above by a universal constant. This paper studies the structure of maximizers of the isotropic constant and shows that a simplicial polytope that maximizes it can only be a simplex. This supports the conjecture that the maximizer over all convex bodies is indeed the simplex.
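
    The formula above can be checked numerically on simple bodies. A minimal sketch, using the cube \([0,1]^d\), where \(\mathrm{vol}(K) = 1\) and \(A(K) = I/12\), so the exact value is \(L_K = 12^{-1/2}\):

```python
# Monte Carlo check of L_K^{2d} = det(A(K)) / vol(K)^2 on K = [0,1]^d,
# where vol(K) = 1 and A(K) = I/12, so the exact value is L_K = 12^{-1/2}.
# Illustrative sketch only.
import numpy as np

def isotropic_constant_cube(d, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random((n, d))              # uniform samples from the cube
    A = np.cov(x, rowvar=False)         # empirical covariance matrix A(K)
    vol = 1.0                           # vol([0,1]^d) = 1
    return (np.linalg.det(A) / vol**2) ** (1.0 / (2 * d))

print(isotropic_constant_cube(d=5), 12 ** -0.5)   # both close to 0.2887
```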

  3. Expanders via random spanning trees. (with Alan Frieze, Navin Goyal, Santosh Vempala)
    SIAM Journal on Computing, 2014.

    This paper shows that the union of two uniformly random spanning trees of the complete graph is an expander graph. It also systematically studies the graph-sparsification properties of unions of random spanning trees of bounded-degree graphs. Similar results are proven for the closely related model of a random subgraph in which every node picks a random incident edge.
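
    A small experiment is consistent with this. The sketch below draws two uniform spanning trees of \(K_n\) with the Aldous-Broder random-walk algorithm, takes their union, and computes the spectral gap of the normalized Laplacian; all names and parameters are illustrative only.

```python
# Empirical look at the expander property: the union of two uniform
# random spanning trees of K_n tends to have a large spectral gap.
# Illustrative sketch only; trees drawn via Aldous-Broder.
import numpy as np

def random_spanning_tree(n, rng):
    """Edge set of a uniform spanning tree of K_n (Aldous-Broder)."""
    edges, visited, u = set(), {0}, 0
    while len(visited) < n:
        v = rng.integers(n - 1)
        v = v + 1 if v >= u else v          # uniform neighbor != u in K_n
        if v not in visited:                # first visit: keep the edge
            visited.add(v)
            edges.add((min(u, v), max(u, v)))
        u = v
    return edges

def spectral_gap(n, edges):
    """Second-smallest eigenvalue of the normalized Laplacian."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    deg = A.sum(axis=1)
    L = np.eye(n) - A / np.sqrt(np.outer(deg, deg))
    return np.sort(np.linalg.eigvalsh(L))[1]

rng = np.random.default_rng(0)
n = 200
union = random_spanning_tree(n, rng) | random_spanning_tree(n, rng)
print(spectral_gap(n, union))               # stays bounded away from 0
```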
     
  4. Efficient volume sampling for row/column subset selection. (with Amit Deshpande)
    FOCS 2010.

    This paper studies the problem of approximating a matrix by another matrix whose rows lie in the span of a few rows of the original matrix. To this end, it introduces an efficient randomized algorithm with optimal approximation guarantees. Somewhat unexpectedly, the algorithm can be efficiently derandomized via the method of conditional expectations. A faster version of the algorithm, based on random projections, is also described.
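
    For intuition, here is a brute-force sketch of volume sampling itself (not the paper's efficient algorithm): a \(k\)-subset \(S\) of rows is chosen with probability proportional to \(\det(A_S A_S^T)\), the squared \(k\)-dimensional volume spanned by those rows, and the matrix is then approximated by projecting its rows onto the span of \(A_S\). Function names and parameters are illustrative.

```python
# Brute-force volume sampling over all k-subsets of rows; only feasible
# for tiny matrices. Illustrative sketch, not the paper's algorithm.
import numpy as np
from itertools import combinations

def volume_sample_rows(A, k, rng):
    """Pick a k-subset S of rows with probability ~ det(A_S A_S^T)."""
    subsets = list(combinations(range(A.shape[0]), k))
    weights = np.array([np.linalg.det(A[list(S)] @ A[list(S)].T)
                        for S in subsets])
    S = subsets[rng.choice(len(subsets), p=weights / weights.sum())]
    return list(S)

def project_onto_rows(A, S):
    """Project each row of A onto the span of the rows indexed by S."""
    Q, _ = np.linalg.qr(A[S].T)             # orthonormal basis of span(A_S)
    return A @ Q @ Q.T

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
S = volume_sample_rows(A, k=2, rng=rng)
err = np.linalg.norm(A - project_onto_rows(A, S), 'fro') ** 2
print(S, err)                               # chosen rows, squared error
```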
     
  5. Heavy-tailed Independent Component Analysis. (with Joseph Anderson, Anupama Nandi, Navin Goyal)
    FOCS 2015.

    This paper introduces the first (as far as I know) algorithmic use of the centroid body, a convex body going back to Petty's work in convex geometry that one can associate with any multivariate distribution having a finite first moment. This body can play the role of the covariance matrix in applications to heavy-tailed data, such as financial data. In this paper, the centroid body is used to give a provably correct algorithm for a standard problem, Independent Component Analysis (ICA), which was previously not known to be efficiently solvable without assumptions on higher moments. The proposed ideas are not just of theoretical interest: the companion paper "Heavy-tailed analogues of the covariance matrix for ICA" describes a practical implementation that outperforms the best previously known algorithms in many heavy-tailed regimes.
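
    To make the object concrete: the centroid body of a random vector \(X\) is the convex body with support function \(h(\theta) = \mathbb{E}\,|\langle X, \theta \rangle|\), which is finite whenever \(X\) has a finite first moment, even when second moments (and hence the covariance matrix) do not exist. A minimal sketch of estimating this support function from heavy-tailed samples; the distribution and parameters are illustrative only.

```python
# Empirical support function h(theta) = E|<X, theta>| of the centroid
# body, estimated from samples. Illustrative sketch only.
import numpy as np

def centroid_support(samples, theta):
    """Empirical support function of the centroid body in direction theta."""
    theta = theta / np.linalg.norm(theta)
    return np.abs(samples @ theta).mean()

rng = np.random.default_rng(0)
# Heavy-tailed toy data: symmetric Pareto coordinates with tail index
# 1.5, so second moments are infinite but first moments are finite.
n, d = 100_000, 3
x = (rng.pareto(1.5, (n, d)) + 1.0) * rng.choice([-1.0, 1.0], (n, d))

for theta in np.eye(d):
    print(centroid_support(x, theta))
```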

  6. The more, the merrier: the blessing of dimensionality for learning large Gaussian mixtures. (with Joseph Anderson, Mikhail Belkin, Navin Goyal, James Voss)
    COLT 2014.

    This paper shows a somewhat unusual "blessing of dimensionality" for a high-dimensional statistical inference problem: estimating the parameters of a high-dimensional Gaussian mixture model, a weighted sum of Gaussian densities. The paper gives an efficient algorithm that estimates the means and weights when the covariances are identical and known, assuming that a certain conditioning parameter of the mixture is well behaved. It also shows that this conditioning parameter is generically well behaved, in the smoothed-analysis sense. Finally, it completes the picture of the "blessing of dimensionality" by showing that no such generic efficiency is possible in low dimension.
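
    As a toy illustration of the setting (this is not the paper's algorithm), the sketch below draws data from a mixture of spherical Gaussians with identical, known covariance and shows one simple facet of the blessing: generically drawn means separate as the dimension grows, while the per-component noise scale stays fixed. All names and parameters are illustrative.

```python
# Inference setting: data from sum_i w_i N(mu_i, I) with the mu_i and
# w_i unknown. Toy illustration only, not the paper's algorithm.
import numpy as np

def gmm_sample(means, weights, n, rng):
    """n points from the mixture sum_i weights[i] * N(means[i], I)."""
    comp = rng.choice(len(weights), size=n, p=weights)
    return means[comp] + rng.standard_normal((n, means.shape[1]))

rng = np.random.default_rng(0)
k = 5
for d in (2, 20, 200):
    means = rng.standard_normal((k, d))     # generic (smoothed) means
    gap = min(np.linalg.norm(means[i] - means[j])
              for i in range(k) for j in range(i + 1, k))
    x = gmm_sample(means, np.full(k, 1.0 / k), n=1_000, rng=rng)
    print(d, x.shape, round(gap, 2))        # separation grows with d
```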