1300 Nicollet Mall

Minneapolis, Minnesota, USA 55403

(specific times and locations of sessions are below)

Low-rank and sparse modeling are two of the most ubiquitous paradigms of data analysis. Low-rank modeling reduces the dimension of the data, whereas sparse modeling reduces the description of the data by selecting a few features from a large dictionary. These two paradigms can be combined, e.g., when modeling data with a few low-rank models or with a single low-rank model having sparse residual. The goal of this workshop is to discuss some of the very recent and exciting developments of such modeling and highlight fundamental mathematical theories related to these explorations. Furthermore, the workshop will discuss interesting areas of applications for these developments.

Gilad Lerman, University of Minnesota, Twin Cities

The official program for the minisymposium is online: MS50 (session I), MS63 (session II) and MS79 (session III).

Room: Greenway C - Level 2

**10:30-10:55 Robust image recovery via total variation minimization**

**Rachel Ward**, University of Texas at Austin, USA; Deanna Needell, Claremont McKenna College, USA

Slides of Talk

**11:00-11:25 Fast global convergence of gradient methods for high-dimensional statistical recovery**

**Sahand Negahban**, Massachusetts Institute of Technology, USA

These generic results are applicable to a wide variety of problems. This work considers deconvolving two sparse vectors, analyzes a spread-spectrum coding scheme for impulsive noise, and shows when it is possible to deconvolve a low-rank matrix corrupted with a special type of noise. As an additional benefit, this analysis recovers, and extends, known weak and strong thresholds for the basis pursuit problem.

Jarvis Haupt

Room: Greenway C - Level 2

**4:00-4:25 Performance limits of non-local means**

Ery Arias-Castro, University of California, San Diego, USA; Joseph Salmon and **Rebecca Willett**, Duke University, USA

**Abstract:** In this talk I describe a novel theoretical characterization of the performance of non-local means (NLM) for noise removal. NLM has proven effective in a variety of empirical studies, but little is understood fundamentally about how it performs relative to classical methods based on wavelets or how various parameters (e.g., patch size) should be chosen. For cartoon images and images which may contain thin features and regular textures, the error decay rates of NLM are derived and compared with those of linear filtering, oracle estimators, variable-bandwidth kernel methods, Yaroslavsky’s filter and wavelet thresholding estimators. The trade-off between global and local search for matching patches is examined, and the bias reduction associated with the local polynomial regression version of NLM is analyzed. The theoretical results are validated via simulations for 2D images corrupted by additive white Gaussian noise. This is joint work with Ery Arias-Castro and Joseph Salmon.
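To make the object of study concrete, here is a minimal non-local means filter: each pixel is replaced by a weighted average of pixels in a search window, with weights determined by patch similarity. The patch size, search-window size, and bandwidth `h` below are illustrative choices, not the parameter settings analyzed in the talk.

```python
import numpy as np

def nonlocal_means(img, patch=3, search=7, h=0.5):
    """Minimal non-local means: each pixel becomes a weighted average of
    window pixels, weighted by similarity of their surrounding patches."""
    hp, hs = patch // 2, search // 2
    pad = hp + hs
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad, j + pad
            ref = padded[ci - hp:ci + hp + 1, cj - hp:cj + hp + 1]
            num = den = 0.0
            for di in range(-hs, hs + 1):
                for dj in range(-hs, hs + 1):
                    pi, pj = ci + di, cj + dj
                    cand = padded[pi - hp:pi + hp + 1, pj - hp:pj + hp + 1]
                    # weight decays with squared patch distance
                    w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                    num += w * padded[pi, pj]
                    den += w
            out[i, j] = num / den
    return out
```

The trade-off the talk examines is visible here: enlarging `search` from a local window toward the whole image trades computation and bias for more matching candidates.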

**4:30-4:55 A Novel M-Estimator for Robust PCA**

**Teng Zhang**, University of Minnesota, USA; Gilad Lerman, University of Minnesota, USA

**Abstract:** We formulate a convex minimization to robustly recover a subspace from a contaminated data set, partially sampled around it, and propose a fast iterative algorithm to achieve the corresponding minimum. We establish exact recovery by this minimizer, quantify the effect of noise and regularization, explain how to take advantage of a known intrinsic dimension and establish linear convergence of the iterative algorithm. We compare our method with many other algorithms for Robust PCA on synthetic and real data sets and demonstrate state-of-the-art speed and accuracy.

Slides of Talk
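A well-known convex formulation along the lines this abstract describes minimizes the sum of projected norms, min Σᵢ ‖Q xᵢ‖₂ over symmetric Q with trace(Q) = 1, so that inliers fall into the near-kernel of the minimizer; the sketch below solves it with a simple IRLS fixed-point iteration. This is a schematic reading of the abstract, not the authors' exact formulation or algorithm.

```python
import numpy as np

def robust_subspace(X, d, iters=200, delta=1e-10):
    """IRLS sketch for: minimize sum_i ||Q x_i||_2 over symmetric Q with
    trace(Q) = 1. Inliers end up in the near-kernel of the minimizer, so
    the subspace is spanned by Q's bottom-d eigenvectors.
    X holds one sample per row."""
    n, D = X.shape
    Q = np.eye(D) / D
    for _ in range(iters):
        # reweight each sample by 1/||Q x_i|| (floored at delta)
        norms = np.maximum(np.linalg.norm(X @ Q, axis=1), delta)
        A = (X / norms[:, None]).T @ X   # sum_i x_i x_i^T / ||Q x_i||
        Q = np.linalg.inv(A)
        Q = Q / np.trace(Q)              # restore the trace constraint
    _, vecs = np.linalg.eigh(Q)          # eigenvalues in ascending order
    return vecs[:, :d]                   # orthonormal basis estimate
```

The `delta` floor plays the role of the regularization the abstract mentions; the fixed-point update is the natural reweighted-least-squares step for this objective.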

**5:00-5:25 Compressive Principal Component Pursuit**

**Yi Ma**, University of Illinois at Urbana-Champaign, USA; John Wright, Columbia University, USA

**Abstract:** We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components from a small set of linear measurements. This problem arises in compressed sensing of videos and hyperspectral images, as well as in the analysis of transformation-invariant low-rank matrix recovery. We analyze the performance of the natural convex heuristic for solving this problem, under the assumption that measurements are chosen uniformly at random. We prove that this heuristic exactly recovers the low-rank and sparse terms, provided the number of observations exceeds the number of intrinsic degrees of freedom by a polylogarithmic factor. Our analysis introduces several ideas that may be of independent interest for the more general problem of compressed sensing of superpositions of structured signals.

Slides of Talk
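The compressive setting of this talk generalizes the basic principal component pursuit program min ‖L‖* + λ‖S‖₁ subject to L + S = M. For orientation, here is a minimal augmented-Lagrangian solver for that basic (non-compressive) program; λ = 1/√max(m,n) is the standard choice from the PCP literature, and the μ heuristic is a common default, neither taken from this paper.

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding (prox of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def pcp(M, iters=1000):
    """Basic principal component pursuit via alternating prox steps on
    the augmented Lagrangian: split M into low-rank L and sparse S."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / np.abs(M).sum()   # common step-size heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                  # dual variable
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)          # dual ascent on L + S = M
    return L, S
```

The compressive version studied in the talk observes only a few linear measurements of M rather than M itself, which replaces the exact constraint above with a measurement-consistency constraint.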

**5:30-5:55 Robust locally linear analysis with applications to image denoising and blind inpainting**

Room: Greenway C - Level 2

**10:30-10:55 Poisson tensor factorization for sparse count data**

**Tamara G. Kolda**, Sandia National Laboratories, USA; Eric Chi, University of California, Los Angeles, USA

Artin Armagan, SAS Institute, Inc., USA; Larry Carin, David Dunson, Rayan Saab and

**Abstract:** Previous theoretical analysis of Bayesian Compressed Sensing has focused purely upon the consistency behavior of MAP estimators. In this talk, we shall demonstrate that much stronger statements can be made about the posterior in Bayesian Compressed Sensing, which can be useful for uncertainty quantification. Leveraging metric geometry and nonparametric Bayesian techniques, we exhibit universal finite undersampling concentration bounds of Bayesian posteriors for linear regression. The form of these bounds makes it clear that only priors with sufficient concentration on sparse signals enjoy strong contraction around the true parameter in this regime. Based upon these bounds, we also obtain asymptotic contraction rates for well-structured priors.

**11:30-11:55 Clustering and Embedding of High-Dimensional Multi-Manifold Data using Sparse Representation**

**Ehsan Elhamifar** and Rene Vidal, Johns Hopkins University, USA

**Abstract:** In many problems across different areas of science and engineering, we deal with massive collections of high-dimensional data. With thousands or even millions of dimensions, the data often lie in or close to low-dimensional structures, called manifolds. Separating the data into the underlying low-dimensional models and finding compact representations for them is one of the most fundamental problems in high-dimensional data analysis. We address the problem of clustering and embedding of data on multiple manifolds using sparse representation techniques. We use the collection of data points as a self-expressive dictionary in which each point can be efficiently represented as a sparse combination of a few other points from the same manifold, with appropriate weights that encode locality information. We propose a convex optimization program whose solution is used in a spectral graph theory framework to infer the clustering and embedding of the data. When the manifolds correspond to low-dimensional subspaces, we show that, under appropriate conditions on the principal angles between the subspaces and the distribution of the data, the solution of the proposed convex program recovers the desired solution. We demonstrate the effectiveness of the proposed algorithm through experiments on several real-world problems.

Slides of Talk
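The self-expressive step the abstract describes can be sketched as one small lasso per data point. The toy below uses plain ISTA with an illustrative λ (stand-ins for whatever optimizer and tuning the authors use) and symmetrizes the coefficients into the affinity matrix that a spectral-clustering step would consume.

```python
import numpy as np

def ssc_affinity(X, lam=0.05, iters=300):
    """Sparse self-expression: write each data point (a column of X) as a
    sparse combination of the other points via ISTA on a lasso, then
    symmetrize the coefficients into an affinity matrix."""
    n = X.shape[1]
    C = np.zeros((n, n))
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L for the gradient step
    for j in range(n):
        x = X[:, j]
        c = np.zeros(n)
        for _ in range(iters):
            c = c - step * (X.T @ (X @ c - x))            # gradient step
            c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)
            c[j] = 0.0                          # no self-representation
        C[:, j] = c
    return np.abs(C) + np.abs(C).T
```

Applying spectral clustering to the returned affinity (eigenvectors of its graph Laplacian) then yields the segmentation; the same eigenvectors provide the low-dimensional embedding.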

**12:00-12:25 Recovery of Low-Rank Plus Compressed Sparse Matrices with Application to Unveiling Traffic Anomalies**

Morteza Mardani, Gonzalo Mateos and Georgios B. Giannakis, University of Minnesota, USA

**Abstract:** Given the superposition of a low-rank matrix plus the product of a known fat compression matrix times a sparse matrix, this work establishes deterministic conditions under which exact recovery of the low-rank and sparse components becomes possible. This fundamental identifiability issue arises with traffic anomaly detection in backbone networks, and subsumes compressed sensing as well as the timely low-rank plus sparse matrix recovery tasks encountered in matrix decomposition problems.
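In symbols, the observation model described in the abstract, together with the natural convex estimator for it, reads as follows (the notation and the nuclear-norm-plus-ℓ₁ objective are a hedged reading of the abstract, not necessarily the paper's exact formulation):

```latex
% Y: observed matrix; X: low-rank component; A: sparse component;
% R: known fat compression matrix (e.g., a routing matrix).
Y = X + R A,
\qquad
(\hat{X}, \hat{A}) \in \arg\min_{X,\,A}\; \|X\|_{*} + \lambda \|A\|_{1}
\quad \text{s.t.} \quad Y = X + R A .
```

Setting R to the identity recovers the standard low-rank-plus-sparse decomposition, while a low-rank X of rank zero reduces the program to compressed sensing, matching the subsumption claims in the abstract.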

For further information on this event, please email Gilad Lerman at
`lerman(at)umn.edu`.