Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Abstract

Cited by 1513 (20 self)
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ε? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power-law (or if the coefficient sequence of f in a fixed basis decays like a power-law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|(1) ≥ |f|(2) ≥ ... ≥ |f|(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|(n) ≤ C · n^(−1/p). We take measurements ⟨f, Xk⟩, k = 1, ..., K, where the Xk are N-dimensional Gaussian
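The setup in this abstract is easy to sketch. The following is a hypothetical NumPy illustration (not code from the paper): we build a compressible signal whose sorted magnitudes obey the stated power decay law |f|(n) ≤ C · n^(−1/p), and collect K non-adaptive Gaussian measurements ⟨f, Xk⟩.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 1000, 100
C, p = 1.0, 0.5          # power-law parameters: |f|(n) <= C * n**(-1/p)

# Build a compressible signal: magnitudes follow the power decay law,
# with random signs and a random permutation of positions.
n = np.arange(1, N + 1)
magnitudes = C * n ** (-1.0 / p)
f = rng.permutation(magnitudes * rng.choice([-1.0, 1.0], size=N))

# Check weak-l_p membership: the sorted magnitudes obey the decay bound.
sorted_mags = np.sort(np.abs(f))[::-1]
assert np.all(sorted_mags <= C * n ** (-1.0 / p) + 1e-12)

# Take K non-adaptive measurements <f, X_k> with Gaussian vectors X_k.
X = rng.standard_normal((K, N))
y = X @ f                # y[k] = <f, X_k>
print(y.shape)           # (100,)
```

The measurement matrix is drawn without looking at f, which is what makes the encoding strategy "universal" in the sense of the title.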
Minimax rates of estimation for high-dimensional linear regression over ℓq-balls
, 2009
"... Abstract—Consider the highdimensional linear regression model,where is an observation vector, is a design matrix with, is an unknown regression vector, and is additive Gaussian noise. This paper studies the minimax rates of convergence for estimating in eitherloss andprediction loss, assuming tha ..."
Abstract

Cited by 104 (23 self)
Abstract—Consider the high-dimensional linear regression model y = Xβ* + w, where y ∈ R^n is an observation vector, X ∈ R^(n×d) is a design matrix with d > n, β* ∈ R^d is an unknown regression vector, and w is additive Gaussian noise. This paper studies the minimax rates of convergence for estimating β* in either ℓ2-loss or ℓ2-prediction loss, assuming that β* belongs to an ℓq-ball B_q(R_q) for some q ∈ [0, 1]. It is shown that under suitable regularity conditions on the design matrix X, the minimax optimal rate in ℓ2-loss and ℓ2-prediction loss scales as Θ(R_q ((log d)/n)^(1−q/2)). The analysis in this paper reveals that conditions on the design matrix enter into the rates for ℓ2-error and ℓ2-prediction error in complementary ways in the upper and lower bounds. Our proofs of the lower bounds are information-theoretic in nature, based on Fano's inequality and results on the metric entropy of the ℓq-balls, whereas our proofs of the upper bounds are constructive, involving direct analysis of least squares over ℓq-balls. For the special case q = 0, corresponding to models with an exact sparsity constraint, our results show that although computationally efficient ℓ1-based methods can achieve the minimax rates up to constant factors, they require slightly stronger assumptions on the design matrix than optimal algorithms involving least-squares over the ℓ0-ball. Index Terms—Compressed sensing, minimax techniques, regression analysis.
The Gelfand widths of ℓp-balls for 0 < p ≤ 1
 J. Complexity
"... We provide sharp lower and upper bounds for the Gelfand widths of ℓpballs in the Ndimensional ℓ N qspace for 0 < p ≤ 1 and p < q ≤ 2. Such estimates are highly relevant to the novel theory of compressive sensing, and our proofs rely on methods from this area. ..."
Abstract

Cited by 19 (9 self)
We provide sharp lower and upper bounds for the Gelfand widths of ℓp-balls in the N-dimensional space ℓ_q^N for 0 < p ≤ 1 and p < q ≤ 2. Such estimates are highly relevant to the novel theory of compressive sensing, and our proofs rely on methods from this area.
Encoding the ℓp Ball from Limited Measurements
"... We address the problem of encoding signals which are sparse, i.e. signals that are concentrated on a set of small support. Mathematically, such signals are modeled as elements in the ℓp ball for some p ≤ 1. We describe a strategy for encoding elements of the ℓp ball which is universal in that 1) the ..."
Abstract

Cited by 9 (0 self)
We address the problem of encoding signals which are sparse, i.e. signals that are concentrated on a set of small support. Mathematically, such signals are modeled as elements in the ℓp ball for some p ≤ 1. We describe a strategy for encoding elements of the ℓp ball which is universal in that 1) the encoding procedure is completely generic, and does not depend on p (the sparsity of the signal), and 2) it achieves near-optimal minimax performance simultaneously for all p < 1. What makes our coding procedure unique is that it requires only a limited number of non-adaptive measurements of the underlying sparse signal; we show that near-optimal performance can be obtained with a number of measurements that is roughly proportional to the number of bits used by the encoder. We end by briefly discussing these results in the context of image compression.
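As a rough illustration of the "measure, then spend bits" idea in this abstract (a hypothetical NumPy sketch under my own assumptions, not the paper's actual codec): take non-adaptive Gaussian measurements of a sparse signal and quantize each measurement uniformly to B bits, so the total bit budget scales with the number of measurements K.

```python
import numpy as np

rng = np.random.default_rng(1)

N, K, B = 512, 64, 8     # signal length, measurements, bits per measurement

# A sparse signal: 5 nonzero entries on a random support.
f = np.zeros(N)
support = rng.choice(N, size=5, replace=False)
f[support] = rng.standard_normal(5)

# Non-adaptive Gaussian measurements (drawn without looking at f).
X = rng.standard_normal((K, N))
y = X @ f

# Uniform scalar quantization of each measurement to B bits.
lo, hi = y.min(), y.max()
step = (hi - lo) / (2 ** B - 1)
codes = np.round((y - lo) / step).astype(int)   # K integers of B bits each
y_hat = lo + codes * step                       # dequantized measurements

total_bits = K * B
print(total_bits, np.max(np.abs(y - y_hat)))    # bit budget, quantization error
```

Each dequantized measurement is within half a quantization step of the true one; a decoder would then run a sparse-recovery procedure on y_hat.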
Local privacy and minimax bounds: Sharp rates for probability estimation
 In NIPS
"... We provide a detailed study of the estimation of probability distributions— discrete and continuous—in a stringent setting in which data is kept private even from the statistician. We give sharp minimax rates of convergence for estimation in these locally private settings, exhibiting fundamental tra ..."
Abstract

Cited by 2 (0 self)
We provide a detailed study of the estimation of probability distributions, both discrete and continuous, in a stringent setting in which data is kept private even from the statistician. We give sharp minimax rates of convergence for estimation in these locally private settings, exhibiting fundamental tradeoffs between privacy and convergence rate, as well as providing tools to allow movement along the continuum between privacy and statistical efficiency. One of the consequences of our results is that Warner's classical work on randomized response is an optimal way to perform survey sampling while maintaining privacy of the respondents.
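Warner's randomized response, which this abstract identifies as an optimal private survey mechanism, is simple to sketch (a minimal simulation, not the paper's estimator): each respondent answers truthfully with probability p and gives the opposite answer otherwise, and the surveyor recovers the true proportion by inverting the known flipping probability.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 100_000          # respondents
pi_true = 0.3        # true proportion holding the sensitive attribute
p = 0.75             # probability of answering truthfully (must differ from 1/2)

truth = rng.random(n) < pi_true
tell_truth = rng.random(n) < p
responses = np.where(tell_truth, truth, ~truth)   # answer flipped with prob 1 - p

# P(yes) = p*pi + (1 - p)*(1 - pi) = (2p - 1)*pi + (1 - p), so invert:
lam = responses.mean()
pi_hat = (lam - (1 - p)) / (2 * p - 1)
print(round(pi_hat, 3))   # close to pi_true = 0.3
```

No individual response reveals the respondent's true answer with certainty, yet the aggregate estimate is consistent; smaller p gives more privacy at the cost of a noisier estimate, which is exactly the privacy/efficiency tradeoff the paper quantifies.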
Nearly optimal signal recovery from random projections: Universal encoding strategies?
 IEEE Trans. Info. Theory
, 2006
"... Suppose we are given a vector f in a class F, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision in the Euclidean (`2) metric? This paper shows that if the objects of interest are sparse in a fixed ..."
Abstract

Cited by 1 (0 self)
Suppose we are given a vector f in a class F, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|(n) ≤ R · n^(−1/p), where R > 0 and p > 0. Suppose that we take measurements yk = ⟨f, Xk⟩, k = 1, ..., K, where the Xk are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f♯, defined as the solution to the constraints
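The "simple linear program" this abstract refers to can be sketched with a generic basis-pursuit formulation (my own NumPy/SciPy illustration, not the paper's code): minimizing ‖x‖1 subject to Xx = y is rewritten as an LP over the stacked variable z = [x, t] with the constraints x ≤ t and −x ≤ t.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)

N, K = 60, 30
f = np.zeros(N)
f[rng.choice(N, size=4, replace=False)] = rng.standard_normal(4)  # sparse f

X = rng.standard_normal((K, N))   # Gaussian measurement vectors X_k as rows
y = X @ f                         # y_k = <f, X_k>

# Basis pursuit: min ||x||_1  s.t.  X x = y, as an LP in z = [x, t]:
#   minimize sum(t)  subject to  x - t <= 0,  -x - t <= 0,  X x = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([X, np.zeros((K, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=(None, None), method="highs")
f_sharp = res.x[:N]               # the reconstruction f-sharp

print(np.linalg.norm(f_sharp - f))   # typically tiny here: exact recovery
```

With K = 30 Gaussian measurements of a 4-sparse vector in R^60, recovery is exact with overwhelming probability, matching the abstract's claim; the LP itself only guarantees a feasible minimizer of the ℓ1 norm.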
Bounds on minimax rates for nonparametric regression with additive sparsity
Entropy and sampling numbers of classes of ridge functions
, 2014
"... We study properties of ridge functions f(x) = g(a·x) in high dimensions d from the viewpoint of approximation theory. The considered function classes consist of ridge functions such that the profile g is a member of a univariate Lipschitz class with smoothness α> 0 (including infinite smoothness ..."
Abstract
We study properties of ridge functions f(x) = g(a·x) in high dimensions d from the viewpoint of approximation theory. The considered function classes consist of ridge functions such that the profile g is a member of a univariate Lipschitz class with smoothness α > 0 (including infinite smoothness), and the ridge direction a has ℓp-norm ‖a‖p ≤ 1. First, we investigate entropy numbers in order to quantify the compactness of these ridge function classes in L∞. We show that they are essentially as compact as the class of univariate Lipschitz functions. Second, we examine sampling numbers and face two extreme cases. In case p = 2, sampling ridge functions on the Euclidean unit ball suffers from the curse of dimensionality. Moreover, it is as difficult as sampling general multivariate Lipschitz functions, which is in sharp contrast to the result on entropy numbers. When we additionally assume that all feasible profiles have a first derivative uniformly bounded away from zero at the origin, the complexity of sampling ridge functions reduces drastically to the complexity of sampling univariate Lipschitz functions. In between, the sampling problem's degree of difficulty varies, depending on the values of α and p. Surprisingly, we see almost the entire hierarchy of tractability levels as introduced in the recent monographs by Novak and Woźniakowski. Keywords: ridge functions · sampling numbers · entropy numbers · rate of convergence · information-based complexity · curse of dimensionality
Minimax rates of estimation for high-dimensional linear regression over ℓq-balls