Tutorial on Compressed Sensing and Sparse Recovery, by Prof. Dr. Massimo Fornasier. Part of the Machine Learning Seminar Series.

As part of the Machine Learning Seminar Series, we have the pleasure of announcing the next tutorial, on Compressed Sensing and Sparse Recovery, to be delivered by Prof. Dr. Massimo Fornasier on October 04 and 05, 10:00-12:00, in Storstua at Simula Research Laboratory.

Dr. Fornasier holds a chair in Applied Numerical Analysis at the Technical University of Munich, Germany. The research of Dr. Fornasier and his unit spans a broad spectrum of problems in mathematical modeling, analysis, and numerical analysis. He is particularly interested in the concept of compression as it appears in different forms in data analysis, image and signal processing, and in the adaptive numerical solution of partial differential equations and high-dimensional optimization problems.

Massimo Fornasier is a member of a VQR panel, responsible for the evaluation of the quality of research in Italy. He is also a member of the editorial boards of Networks and Heterogeneous Media, the Journal of Fourier Analysis and Applications, and Calcolo.

Dr. Fornasier will additionally give a one-hour lecture on his current ERC-funded research, entitled "Learning and Sparse Control of Multiagent Systems", on October 04, 13:00-14:00, also in Storstua.

We are very much looking forward to welcoming you at the seminar!

Abstract

The first part of this course addresses the basics of the theory of Compressed Sensing, which describes mathematically the nonadaptive compressed acquisition of data. It is a new paradigm of compression, in which data are acquired nonadaptively at the lowest possible rate by a universal random sampler and can nevertheless be restored almost losslessly, with polynomial complexity, under the assumption that the data can be represented by a sparse coordinate vector with respect to some basis. From a mathematical point of view, the encoding/sampling of the data is modeled linearly as the application of a suitable random matrix to the data. Such random matrices need to possess, with high probability, special spectral properties, which we call the Null Space Property (NSP) or the Restricted Isometry Property (RIP). The near-lossless decoding is realized by solving certain convex programs, namely the minimization of the $\ell_1$-norm over all feasible solutions.
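For concreteness, the following is a minimal Python sketch of this paradigm, assuming NumPy and SciPy; the dimensions, sparsity level, and random seed are illustrative choices, not values prescribed by the course. A Gaussian random matrix, which satisfies the RIP with high probability, serves as the encoder, and the $\ell_1$-minimization decoder is cast as a linear program via the standard positive/negative split.

```python
# Minimal sketch of compressed sensing with an l1-minimization decoder
# (basis pursuit). All dimensions and parameters are illustrative.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 200, 60, 5                            # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian random encoder (RIP w.h.p.)

x = np.zeros(n)                                 # s-sparse ground truth
support = rng.choice(n, s, replace=False)
x[support] = rng.standard_normal(s)
y = A @ x                                       # nonadaptive compressed measurements

# Decode: min ||x||_1 s.t. Ax = y, as an LP with the split x = u - v, u, v >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x))
```

With $m$ on the order of $s \log(n/s)$ measurements, as in this setup, the LP typically recovers the sparse vector essentially exactly.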

In the second part of the course, we present several alternative decoding algorithms that are very simple to implement, robust even when decoding from noisy data, and rather efficient in real-life applications on large data sets. We first explore a very powerful algorithm called iteratively re-weighted least squares (IRLS), which is designed to approach the $\ell_1$-norm minimizer by solving a sequence of quadratic programs. We also show generalizations of this algorithm to low-rank matrix completion, inspired by recommender system problems. The second algorithm we present is iterative hard thresholding, which performs a greedy search for the most relevant coordinates of feasible solutions. It is a first-order algorithm, extremely easy to implement, and highly scalable with the dimensionality, especially if one can take advantage of fast matrix-vector multiplications. All these algorithms are efficient and work well in practice only in the compressed sensing paradigm, where the NSP or the RIP holds for the linear encoder. For problems where such properties of the encoder are not available, we introduce another algorithm, the iterative soft-thresholding algorithm, which always computes a reasonable solution. Minimal sketches of these decoders are given below.
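The sketches below, again assuming NumPy and the encoder A, measurements y, and sparsity s from the previous example, illustrate the three decoders just described. The iteration counts, step sizes, the regularization parameter lam, and the fixed halving of the IRLS smoothing parameter eps are illustrative simplifications; in particular, the full IRLS algorithm adapts eps to the decay of the coordinates rather than halving it.

```python
import numpy as np

def irls(A, y, iters=50, eps=1.0):
    """Simplified IRLS for min ||x||_1 s.t. Ax = y: each step solves a
    weighted least-squares problem in closed form."""
    x = np.linalg.pinv(A) @ y                  # min-norm feasible start
    for _ in range(iters):
        d = np.sqrt(x**2 + eps**2)             # smoothed weights, d_i ~ |x_i|
        z = np.linalg.solve((A * d) @ A.T, y)  # solve (A D A^T) z = y
        x = d * (A.T @ z)                      # x = D A^T z stays feasible
        eps = max(eps / 2, 1e-12)              # crude smoothing decrease
    return x

def iht(A, y, s, iters=300):
    """Iterative hard thresholding: gradient step, then keep the s
    largest-magnitude coordinates."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2       # safe step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = x + mu * A.T @ (y - A @ x)         # gradient step on ||y - Ax||^2 / 2
        keep = np.argsort(np.abs(r))[-s:]      # greedy support selection
        x = np.zeros_like(r)
        x[keep] = r[keep]                      # hard-thresholding operator
    return x

def ista(A, y, lam=0.01, iters=1000):
    """Iterative soft thresholding for min ||y - Ax||^2 / 2 + lam ||x||_1."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = x + mu * A.T @ (y - A @ x)
        x = np.sign(r) * np.maximum(np.abs(r) - mu * lam, 0.0)  # soft threshold
    return x
```

In the illustrative setup above, irls(A, y) and iht(A, y, s) typically recover the sparse vector up to small error, while ista(A, y) returns a slightly biased but still reasonable solution, reflecting the point made in the abstract that soft thresholding remains usable even when the NSP or RIP is not available.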
