Results 1–10 of 60
Exact Matrix Completion via Convex Optimization
, 2008
Abstract

Cited by 837 (26 self)
We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m ≥ C n^1.2 r log n for some positive numerical constant C, then with very high probability, most n × n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
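The guarantee above concerns a convex nuclear-norm program. As a rough illustration only, the sketch below substitutes a simple rank-projection imputation heuristic (not the paper's convex solver) to complete a small low-rank matrix; all names, sizes, and parameters are illustrative.

```python
import numpy as np

def complete_rank_r(M_obs, mask, r, iters=500):
    """Iteratively impute missing entries with a rank-r SVD approximation.

    A heuristic stand-in for nuclear-norm minimization: alternate between
    projecting onto rank-r matrices and resetting the observed entries.
    """
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]      # best rank-r approximation
        X[mask] = M_obs[mask]                # keep observed entries fixed
    return X

# Rank-1 ground truth with one entry hidden per row and per column.
u, v = np.array([1., 2., 3., 4., 5.]), np.array([1., 1., 2., 2., 3.])
M = np.outer(u, v)
mask = np.ones((5, 5), dtype=bool)
for i in range(5):
    mask[i, (i + 1) % 5] = False             # 5 of 25 entries unobserved
X = complete_rank_r(np.where(mask, M, 0.0), mask, r=1)
print(np.linalg.norm(X - M) / np.linalg.norm(M))  # small recovery error
```

With every row and column well covered, the iteration converges quickly on this easy instance; the paper's theory concerns the much harder regime of near-minimal sampling.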
Sparsity and Incoherence in Compressive Sampling
, 2006
Abstract

Cited by 225 (14 self)
We consider the problem of reconstructing a sparse signal x0 ∈ R^n from a limited number of linear measurements. Given m randomly selected samples of Ux0, where U is an orthonormal matrix, we show that ℓ1 minimization recovers x0 exactly when the number of measurements exceeds m ≥ Const · µ²(U) · S · log n, where S is the number of nonzero components in x0, and µ(U) is the largest entry in U properly normalized: µ(U) = √n · max_{k,j} |U_{k,j}|. The smaller µ, the fewer samples needed. The result holds for “most” sparse signals x0 supported on a fixed (but arbitrary) set T. Given T, if the sign of x0 for each nonzero entry on T and the observed values of Ux0 are drawn at random, the signal is recovered with overwhelming probability. Moreover, there is a sense in which this is nearly optimal, since any method succeeding with the same probability would require just about this many samples.
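The incoherence parameter µ(U) in this result is straightforward to compute numerically. As a small illustration (using the complex unitary DFT matrix in place of a real orthonormal one, purely for convenience), the Fourier basis attains the minimal value µ = 1, while the identity basis attains the maximal value √n:

```python
import numpy as np

def coherence(U):
    """mu(U) = sqrt(n) * max_{k,j} |U_{k,j}| for an n x n orthonormal U."""
    n = U.shape[0]
    return np.sqrt(n) * np.abs(U).max()

n = 64
dft = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT: maximally incoherent
print(coherence(dft))                       # ~1, the smallest possible value
print(coherence(np.eye(n)))                 # sqrt(n), the largest possible
```

Per the abstract's bound, sampling in a basis with small µ (like the Fourier basis) requires only about S log n measurements, whereas a coherent basis needs proportionally more.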
Compressive Sensing and Structured Random Matrices
 Radon Series Comp. Appl. Math. XX, 1–95, © De Gruyter 20YY
Abstract

Cited by 160 (19 self)
These notes give a mathematical introduction to compressive sensing, focusing on recovery using ℓ1-minimization and structured random matrices. An emphasis is put on techniques for proving probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to providing conditions that ensure exact or approximate recovery of sparse vectors using ℓ1-minimization.
Stability results for random sampling of sparse trigonometric polynomials
, 2006
Abstract

Cited by 63 (17 self)
Recently, it has been observed that a sparse trigonometric polynomial, i.e. one having only a small number of nonzero coefficients, can be reconstructed exactly from a small number of random samples using Basis Pursuit (BP) and Orthogonal Matching Pursuit (OMP). In the present article it is shown that recovery both by a BP variant and by OMP is stable under perturbation of the sample values by noise. For BP, in addition, the stability result is extended to (non-sparse) trigonometric polynomials that can be well approximated by sparse ones. The theoretical findings are illustrated by numerical experiments. Key Words: random sampling, trigonometric polynomials, Orthogonal Matching Pursuit, Basis Pursuit, compressed sensing, stability under noise, fast Fourier transform, nonequispaced
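The OMP algorithm referenced above is short enough to sketch. The demo below uses a generic random Gaussian measurement matrix rather than the trigonometric system studied in the paper; the sizes, seed, and support indices are illustrative.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the column of A most
    correlated with the current residual, then least-squares refit on the
    selected support."""
    support, residual = [], y
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)  # random measurement matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]            # 3-sparse signal
x_hat = omp(A, A @ x_true, sparsity=3)
print(np.max(np.abs(x_hat - x_true)))             # ~0 when the support is found
```

After each refit the residual is orthogonal to the selected columns, so no column is picked twice; stability under noisy samples, which is the subject of the paper, follows from the same least-squares structure.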
Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles
 J. Amer. Math. Soc
Abstract

Cited by 51 (12 self)
Let K be an isotropic convex body in R^n. Given ε > 0, how many independent points Xi uniformly distributed on K are needed for the empirical covariance matrix to approximate the identity up to ε with overwhelming probability? Our paper answers this question from [12]. More precisely, let X ∈ R^n be a centered random vector with a log-concave distribution and with the identity as covariance matrix. An example of such a vector X is a random point in an isotropic convex body. We show that for any ε > 0 there exists C(ε) > 0 such that if N ∼ C(ε) n and (Xi)_{i≤N} are i.i.d. copies of X, then ‖(1/N) ∑_{i=1}^N Xi ⊗ Xi − Id‖ ≤ ε with probability larger than 1 − exp(−c√n).
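The N ∼ C(ε) n sample-complexity statement can be checked empirically in the Gaussian case (a log-concave isotropic distribution); the dimension, sample size, and seed below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 50, 2000                      # dimension and sample size, N ~ C(eps) * n
X = rng.standard_normal((N, n))      # isotropic log-concave (Gaussian) samples
emp_cov = X.T @ X / N                # empirical covariance, (1/N) sum x_i x_i^T
err = np.linalg.norm(emp_cov - np.eye(n), 2)   # operator-norm deviation
print(err)                           # typically around 2 * sqrt(n / N)
```

Doubling N roughly halves the squared deviation, consistent with a sample size proportional to n for a fixed accuracy ε.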
A tail inequality for suprema of unbounded empirical processes with applications to Markov chains
, 2008
Nonparametric adaptive estimation for pure jump Lévy processes
, 2008
Abstract

Cited by 26 (7 self)
This paper is concerned with nonparametric estimation of the Lévy density of a pure jump Lévy process. The sample path is observed at n discrete instants with fixed sampling interval. We construct a collection of estimators obtained by deconvolution methods and deduced from appropriate estimators of the characteristic function and its first derivative. We obtain a bound for the L^2-risk under general assumptions on the model. Then we propose a penalty function that allows us to build an adaptive estimator. The risk bound for the adaptive estimator is obtained under additional assumptions on the Lévy density. Examples of models fitting in our framework are described, and rates of convergence of the estimator are discussed. June 20, 2008
On concentration of self-bounding functions
, 2009
Abstract

Cited by 16 (0 self)
We prove some new concentration inequalities for self-bounding functions using the entropy method. As an application, we recover Talagrand’s convex distance inequality. The new Bernstein-like inequalities for self-bounding functions are derived thanks to a careful analysis of the so-called Herbst argument. The latter involves comparison results between solutions of differential inequalities that may be interesting in their own right.
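A basic consequence of the self-bounding property (via the Efron–Stein inequality) is Var(f) ≤ E[f], which is easy to observe in simulation. The example below uses the number of distinct values in an i.i.d. sample, a standard self-bounding (configuration) function; the parameters and seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, trials = 100, 50, 4000
# f = number of distinct values among n i.i.d. draws from {0, ..., m-1}:
# a classic self-bounding function (dropping one coordinate lowers f by at
# most 1, and those unit decrements sum to at most f), so Var(f) <= E[f].
f = np.array([len(np.unique(rng.integers(0, m, size=n)))
              for _ in range(trials)])
print(f.mean(), f.var())   # empirical variance stays well below the mean
```

The Bernstein-like inequalities of the paper sharpen this variance bound into exponential tail estimates.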
Adaptive Estimation of a Distribution Function and its Density in Sup-Norm Loss by Wavelet and Spline Projections
, 2008
Abstract

Cited by 12 (8 self)
Given an i.i.d. sample from a distribution F on R with uniformly continuous density p0, purely data-driven estimators are constructed that efficiently estimate F in sup-norm loss, and simultaneously estimate p0 at the best possible rate of convergence over Hölder balls, also in sup-norm loss. The estimators are obtained by applying a model selection procedure close to Lepski’s method, with random thresholds, to projections of the empirical measure onto spaces spanned by wavelets or B-splines. Explicit constants in the asymptotic risk of the estimator are obtained, as well as oracle-type inequalities in sup-norm loss. The random thresholds are based on suprema of Rademacher processes indexed by wavelet or spline projection kernels. This requires Bernstein analogues of the inequalities in Koltchinskii (2006) for the deviation of suprema of empirical processes from their Rademacher symmetrizations.
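A minimal illustration of sup-norm loss for distribution function estimation, using the plain empirical CDF against a standard normal truth (the paper's wavelet/spline projection machinery is not reproduced here; sizes and seed are illustrative):

```python
import math
import numpy as np

def sup_norm_err(n, rng):
    """Kolmogorov (sup-norm) distance between the empirical CDF of n
    standard normal draws and the true CDF Phi."""
    x = np.sort(rng.standard_normal(n))
    phi = np.array([0.5 * (1 + math.erf(t / math.sqrt(2))) for t in x])
    upper = np.max(np.arange(1, n + 1) / n - phi)  # ECDF exceeds the truth
    lower = np.max(phi - np.arange(0, n) / n)      # truth exceeds the ECDF
    return max(upper, lower)

rng = np.random.default_rng(3)
e_small, e_big = sup_norm_err(200, rng), sup_norm_err(20000, rng)
print(e_small, e_big)   # error shrinks at roughly the n^(-1/2) rate
```

The empirical CDF already attains the efficient n^(-1/2) rate for F; the contribution of the paper is achieving this simultaneously with optimal sup-norm density estimation via adaptive projections.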