MSMoE05
Compressed Sensing, Extensions and Applications – Part II of III
For Part I, see MSMoD05
For Part III, see MSTuD05
Date: August 10
Time: 16:00–18:00
Room: 215
Organizer:
Kutyniok, Gitta (Technische Universität Berlin)
Rauhut, Holger (RWTH Aachen Univ.)
Abstract: Compressed sensing has seen enormous research activity in recent years. The key principle is that (approximately) sparse signals can be recovered efficiently from what was previously believed to be vastly incomplete information. For this reason, compressed sensing and its algorithms (often convex optimization approaches) have a large range of applications such as magnetic resonance imaging, radar, and wireless communications. Remarkably, all provably optimal measurement schemes are based on randomness; compressed sensing therefore connects various mathematical fields such as random matrix theory, optimization, approximation theory, and harmonic analysis. Recent developments have extended the theory and its algorithms to the recovery of low-rank matrices from incomplete information, to the phaseless estimation problem, and to low-rank tensor recovery. The minisymposium aims to bring together experts in the field and to provide an overview of its most recent results.
MSMoE05-1
16:00–16:30
Analysis of low rank matrix recovery via Mendelson's small ball method
Terstiege, Ulrich (RWTH Aachen Univ.)
Rauhut, Holger (RWTH Aachen Univ.)
Kabanava, Maryia (RWTH Aachen Univ.)
Abstract: We study low rank matrix recovery from undersampled measurements via nuclear norm minimization.
We aim to recover a matrix X from few linear measurements (Frobenius inner products with measurement matrices).
For different scenarios of independent random measurement matrices we derive bounds on the minimal number of measurements sufficient to uniformly recover any rank-$r$ matrix with high probability.
Our results are stable under passing to approximately low-rank matrices and under noisy measurements.
MSMoE05-2
16:30–17:00
Tensor completion in hierarchical tensor formats
Schneider, Reinhold (Inst. for Mathematics)
Abstract:
The hierarchical Tucker tensor format (HT, Hackbusch tensors) and tensor trains
(TT, Tyrtyshnikov tensors, I. Oseledets) have been introduced recently for low-rank tensor product approximation.
Hierarchical tensor decompositions are based on subspace approximation, extending the Tucker decomposition into a multilevel framework. They therefore inherit favorable properties of Tucker tensors, e.g.
they offer a stable and robust approximation, while still enabling low-order scaling with respect to the dimension.
For many high-dimensional problems that have been hard to handle so far, this approach
may offer a novel strategy to circumvent the curse of dimensionality.
For uncertainty quantification we cast the original boundary value problem with uncertain coefficients
into a high-dimensional parametric boundary value problem, discretized by a Galerkin method.
The high-dimensional problem is then cast into an optimization
problem, constrained by the restriction to tensors of
prescribed ranks $\mathbf{r}$. This problem can be solved by optimization on manifolds, or more simply by alternating least squares.
Since the norm of the underlying energy space is a cross norm, preconditioning is required only for the spatial part and can be
performed by standard multigrid approaches, e.g. BPX.
Moreover, residual-based error estimators can be applied to estimate the (total) error of the parameter-dependent BVP.
These estimators can be used to balance the FEM discretization, the polynomial chaos expansion, and the low-rank approximation.
Importantly, this leads to a modification of the orthogonality of the component tensors used.
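The subspace idea behind these formats can be made concrete with the TT format: below is a minimal NumPy sketch of the standard TT-SVD, which computes the cores by sequential truncated SVDs (generic illustration code, not the speaker's implementation; the relative truncation threshold `eps` is an arbitrary choice):

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose tensor T into TT cores G_k of shape (r_k, n_k, r_{k+1})
    via sequential truncated SVDs of the unfolding matrices."""
    dims = T.shape
    d = len(dims)
    cores, r = [], 1
    M = T.reshape(r * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))   # truncated TT rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        M = (np.diag(s[:rk]) @ Vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into the full tensor."""
    T = cores[0]                                   # shape (1, n_0, r_1)
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T.squeeze(axis=(0, -1))
```

The HT format proceeds analogously along a dimension tree rather than a linear ordering of the modes, which is what makes the multilevel subspace structure hierarchical.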
MSMoE05-3
17:00–17:30
Nonlinear $\ell_p$-Residual Minimization in a Greedy Algorithm for Phase Retrieval
Sigl, Juliane (Technical Univ. Munich)
Abstract: Motivated by a very efficient greedy algorithm we recently introduced for solving phase retrieval problems with convergence guarantees, we present a modification of iteratively reweighted least squares to solve nonlinear residual minimizations in $\ell_p$-norms.
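The classical linear building block, iteratively reweighted least squares for $\ell_p$-residual minimization, can be sketched as follows (a generic textbook IRLS in NumPy, not the modification presented in the talk; the smoothing parameter `eps`, which keeps the weights bounded, is an arbitrary choice):

```python
import numpy as np

def irls_lp(A, b, p=1.0, iters=50, eps=1e-8):
    """IRLS for min_x ||A x - b||_p with 1 <= p < 2: repeatedly solve a
    weighted least squares problem with weights w_i = (r_i^2 + eps)^{(p-2)/2}
    computed from the current residual r = A x - b."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # start from the l2 solution
    for _ in range(iters):
        r = A @ x - b
        sw = (r ** 2 + eps) ** ((p - 2) / 4)   # square roots of the weights
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# p = 1 gives robust regression: a few gross outliers barely affect the fit.
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 3))
x0 = np.array([1.0, -2.0, 0.5])
b = A @ x0
b[:6] += 50.0                                  # corrupt six measurements
x_l1 = irls_lp(A, b, p=1.0)
```

In phase retrieval the residual is nonlinear in the unknown, which is where the talk's modification of this reweighting scheme comes in.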
MSMoE05-4
17:30–18:00
On deterministic structured sampling of structured signals in compressed sensing
Adcock, Ben (Simon Fraser Univ.)
Abstract: Recent theoretical developments in CS reveal that in many applications the optimal random sampling strategy depends on the structure of the signal itself. Thus, we are faced with the intriguing problem of designing optimal sampling strategies for classes of signals. In tomography problems, however, the sampling patterns are mostly deterministic (although highly structured), yet standard $\ell_1$ recovery works very well, albeit only on certain structured signals. We will discuss a new theory explaining this.
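As a baseline, standard $\ell_1$ recovery from a subsampled deterministic transform can be sketched with ISTA on a partial DCT (a generic NumPy illustration, not the speaker's construction; the row subset is drawn at random with a fixed seed purely for simplicity here, whereas designing good deterministic subsets is exactly the problem the talk addresses, and `lam` and the sizes are arbitrary choices):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, built explicitly (a deterministic transform)."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    D[0] /= np.sqrt(2.0)
    return D

def ista(A, y, lam=0.01, iters=3000):
    """ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1: soft-thresholded
    gradient steps with step size 1/L, L the Lipschitz constant."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

rng = np.random.default_rng(3)
n, m = 64, 40
A = dct_matrix(n)[rng.choice(n, size=m, replace=False)]  # partial DCT rows
x0 = np.zeros(n)
x0[rng.choice(n, size=3, replace=False)] = np.array([1.0, -1.0, 2.0])
y = A @ x0                                               # undersampled data
x_hat = ista(A, y)
```

Whether such recovery succeeds for a *deterministic* row subset depends on how the sampled rows interact with the structure of the signal class, which is the subject of the theory discussed in the talk.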
Footnote: Code: Type–Date–Time–Room No.
Type: IL=Invited Lecture, SL=Special Lectures, MS=Minisymposia, IM=Industrial Minisymposia, CP=Contributed Papers, PP=Posters
Date: Mo=Monday, Tu=Tuesday, We=Wednesday, Th=Thursday, Fr=Friday
Time: A=8:30–9:30, B=10:00–11:00, C=11:10–12:10, BC=10:00–12:10, D=13:30–15:30, E=16:00–18:00, F=19:00–20:00, G=12:10–13:30, H=15:30–16:00
Room No.: TBA
