CSE 788.01 Spring 2012 - Sparse representations and compressed sensing

Instructor: Luis Rademacher.
Lectures: TTh 11:00am-12:18pm, Dreese Labs 480

Question: Today's point-and-shoot cameras collect a large amount of information (say, 12 megapixels), but a good fraction of it is immediately discarded, because the result is an image file in the lossy JPEG format. Can we collect less information and still get the same output? Yes, provided the camera uses a special mirror that mixes pixels and the image is sparse in the right sense, that is, it is a vector with few non-zero entries in the right basis (in the case of JPEG compression, the DCT basis works well for real-world pictures).
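
To make "few non-zero entries in the right basis" concrete, here is a small illustrative sketch (the signal and sizes are made up for illustration, and the naive transform below is just the textbook DCT-II, the transform underlying JPEG): a signal built from two cosine basis vectors is dense in the standard basis but exactly 2-sparse in the DCT basis.

```python
import math

def dct2(x):
    """Naive O(N^2) orthonormal DCT-II of a real sequence."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

N = 64
# A smooth signal: a mix of two cosine "frequencies".
# Every entry is non-zero, so it is dense in the standard basis.
x = [math.cos(math.pi * (n + 0.5) * 3 / N)
     + 0.5 * math.cos(math.pi * (n + 0.5) * 10 / N)
     for n in range(N)]

X = dct2(x)
# By orthogonality of the DCT basis, only two coefficients survive.
large = [k for k in range(N) if abs(X[k]) > 1e-8]
print(large)  # prints [3, 10]
```

The same principle, applied blockwise in two dimensions, is why JPEG can throw away most DCT coefficients of a photograph with little visible loss.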

In this course we will see how sparse representations help reduce data collection, storage, and computation. Tentative topics include sparse representations in streaming algorithms (can we keep statistics on network packets with very limited storage?), data structures and machine learning, and compressed sensing, a framework that studies how an unknown sparse vector can be recovered from far fewer measurements than its length (as in the camera example). Among the diverse applications, we will discuss some of the following: faster MRI, matrix completion (the Netflix problem), geometric separation, image segmentation, and radar imaging for oil prospecting.
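
As a toy version of the recovery problem (a sketch with made-up sizes, not an algorithm from the readings): take m = 64 random linear measurements of a length-256 signal that has a single non-zero entry, then recover it with one step of matching pursuit. With unit-norm random columns, the true column's correlation with the measurement vector dominates all the others with overwhelming probability, even though we observe far fewer numbers than the signal's length.

```python
import math
import random

random.seed(1)
m, n = 64, 256  # 64 measurements of a length-256 unknown signal

# Random measurement matrix, stored column-wise, columns normalized to unit length.
cols = []
for _ in range(n):
    c = [random.gauss(0.0, 1.0) for _ in range(m)]
    norm = math.sqrt(sum(v * v for v in c))
    cols.append([v / norm for v in c])

# The unknown 1-sparse signal: a single spike of height 3.0 at position 17.
support, amplitude = 17, 3.0
y = [amplitude * cols[support][i] for i in range(m)]  # y = A x

# One step of matching pursuit: the column most correlated with y wins.
corr = [sum(cols[j][i] * y[i] for i in range(m)) for j in range(n)]
best = max(range(n), key=lambda j: abs(corr[j]))
estimate = corr[best]  # valid because the columns have unit norm

# With these sizes the spurious correlations are tiny compared to the true one,
# so best recovers the support index and estimate recovers the amplitude.
print(best, estimate)
```

Recovering signals with more than one non-zero entry, with noise, and with provable guarantees is exactly where the convex-optimization and random-matrix machinery of the course comes in.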

The emphasis will be on algorithmic aspects, theory, and applications, with signal processing covered only as needed.


No lecture on Thu 5/4/2012.

Problem sets:

PS1 (due at the lecture on April 17th).

Helpful courses/topics (NOT required):

The following topics are helpful but not required, as we will review them near the beginning of the course as needed: linear algebra, linear and nonlinear optimization, signal processing, advanced algorithms.

Office hours:

By appointment with the instructor.


Evaluation:

Depending on the number of students and their interests, grades will be based on some combination of (some, but not necessarily all, of) homework, paper presentations, and work on a research project (definitely no exams).

Precise details on group discussions, critiques, and in-class discussions here.

Recommended readings:

List of papers (warning: there is no guarantee that these are the latest versions; search online).

  1. A remark on compressed sensing. Kashin and Temlyakov.
  2. Almost-Euclidean Subspaces of ℓ_1^N via Tensor Products: A Simple Approach to Randomness Reduction. Indyk and Szarek.
  3. Euclidean Sections of ℓ_1^N with Sublinear Randomness and Error-Correction over the Reals. Guruswami, Lee, Wigderson.
  4. Optimal column-based low-rank matrix reconstruction. Guruswami, Sinop.
  5. Near-Optimal Column-Based Matrix Reconstruction. Boutsidis, Drineas, Magdon-Ismail.
  6. Restricted isometry property of matrices with independent columns and neighborly polytopes by random sampling. Adamczak, Litvak, Pajor, Tomczak-Jaegermann.
  7. Graph Sparsification by Effective Resistances. D. Spielman, N. Srivastava.
  8. Twice-Ramanujan Sparsifiers. J. Batson, D. Spielman, N. Srivastava.
  9. Reconstruction from anisotropic random measurements. Rudelson, Zhou.
  10. On the power of adaptivity in sparse recovery. Indyk, Price, Woodruff.
  11. Numerical Linear Algebra in the Streaming Model (full version). Ken Clarkson, David Woodruff.
  12. Fast Moment Estimation in Data Streams in Optimal Space. Daniel M. Kane, Jelani Nelson, Ely Porat, David Woodruff. Full version on arXiv.
  13. Robust 1-bit compressed sensing and sparse logistic regression: a convex programming approach. Y. Plan, R. Vershynin. Submitted. [arXiv:1202.1212]
  14. One-bit compressed sensing by linear programming. Y. Plan, R. Vershynin. Communications on Pure and Applied Mathematics, to appear. [arXiv:1109.4299] See also the 1BitCompressiveSensing page at Rice.
  15. Sublinear time, measurement-optimal, sparse recovery for all. Ely Porat, Martin Strauss.

    1. Compressive Dual Photography. Pradeep Sen and Soheil Darabi. Computer Graphics Forum, Vol. 28, No. 2, March 2009 (Proceedings of Eurographics 2009).
    2. Compressive Rendering of Multidimensional Scenes. Pradeep Sen, Soheil Darabi, Lei Xiao. To appear in Lecture Notes in Computer Science, "Video Processing and Computational Video," Eds: D. Cremers, M. Magnor, M. Oswald, L. Zelnik-Manor. Springer.
  16. The Convex Geometry of Linear Inverse Problems
    Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, and Alan S. Willsky
  17. Simple Bounds for Recovering Low-complexity Models
    Emmanuel Candes, Benjamin Recht
  18. Nuclear norm penalization and optimal rates for noisy low rank matrix completion
    Vladimir Koltchinskii, Alexandre B. Tsybakov, Karim Lounici
  19. Multi-Label Prediction via Compressed Sensing
    Daniel Hsu, Sham M. Kakade, John Langford, Tong Zhang
  20. Geometry of log-concave ensembles of random matrices and approximate reconstruction.
  21. Nearly Optimal Sparse Fourier Transform
    Haitham Hassanieh, Piotr Indyk, Dina Katabi, Eric Price
  22. Gossip PCA
    Satish Babu Korada, Andrea Montanari, Sewoong Oh
  23. The Power of Convex Relaxation: Near-Optimal Matrix Completion
    Emmanuel J. Candes, Terence Tao
  24. Optimal Solutions for Sparse Principal Component Analysis. A. d’Aspremont, F. Bach, L. El Ghaoui. Journal of Machine Learning Research, 9(Jul):1269–1294, 2008.
  25. Sparse Machine Learning Methods for Understanding Large Text Corpora. Laurent El Ghaoui, Guan-Cheng Li, Viet-An Duong, Vu Pham, Ashok Srivastava, Kanishka Bhaduri. In Proc. Conference on Intelligent Data Understanding, 2011.
  26. Large-Scale Sparse Principal Component Analysis with Application to Text Data. Youwei Zhang and Laurent El Ghaoui.

Schedule (tentative):