Results 1–10 of 14
Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting
, 2007
Abstract
Cited by 51 (2 self)
Abstract—The problem of sparsity pattern or support set recovery refers to estimating the set of nonzero coefficients of an unknown vector β ∈ R^p based on a set of n noisy observations. It arises in a variety of settings, including subset selection in regression, graphical model selection, signal denoising, compressive sensing, and constructive approximation. The sample complexity of a given method for subset recovery refers to the scaling of the required sample size n as a function of the signal dimension p, the sparsity index k (the number of nonzeros in β), the minimum value β_min of β over its support, and other parameters of the measurement matrix. This paper studies the information-theoretic limits of sparsity recovery: in particular, for a noisy linear observation model with random measurement matrices drawn from general Gaussian ensembles, we derive both a set of sufficient conditions for exact support recovery using an exhaustive search decoder, and a set of necessary conditions that any decoder, regardless of its computational complexity, must satisfy for exact support recovery. This analysis of fundamental limits complements our previous work on sharp thresholds for support set recovery over the same set of random measurement ensembles using the polynomial-time Lasso method (ℓ1-constrained quadratic programming). Index Terms—Compressed sensing, ℓ1-relaxation, Fano's method, high-dimensional statistical inference, information-theoretic
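The exhaustive-search decoder analyzed in this abstract can be illustrated with a small sketch: enumerate every candidate support of size k, fit least squares on each, and keep the support with the smallest residual. The dimensions, signal values, and noise level below are my own toy choices, kept tiny because the decoder examines all C(p, k) supports.

```python
import itertools
import numpy as np

# Illustrative sketch (not the paper's analysis): exact support recovery via
# an exhaustive-search decoder on a noisy linear model y = X beta + w, with
# X drawn from a standard Gaussian ensemble. All parameters are toy choices.
rng = np.random.default_rng(0)
n, p, k = 40, 10, 2          # sample size, ambient dimension, sparsity
beta = np.zeros(p)
beta[[1, 7]] = [1.5, -2.0]   # true support {1, 7}; values well above the noise
X = rng.standard_normal((n, p))
y = X @ beta + 0.1 * rng.standard_normal(n)

def exhaustive_decoder(X, y, k):
    """Return the size-k support minimizing the least-squares residual."""
    best, best_rss = None, np.inf
    for S in itertools.combinations(range(X.shape[1]), k):
        XS = X[:, S]
        coef, *_ = np.linalg.lstsq(XS, y, rcond=None)
        rss = np.sum((y - XS @ coef) ** 2)
        if rss < best_rss:
            best, best_rss = set(S), rss
    return best

print(exhaustive_decoder(X, y, k))  # recovers the true support in this regime
```

This brute-force decoder is the information-theoretic benchmark: its sample complexity marks what any method, tractable or not, can hope to achieve.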
Oracle Inequalities for Inverse Problems
, 2000
Abstract
Cited by 43 (6 self)
We consider a sequence space model of statistical linear inverse problems where we need to estimate a function f from indirect noisy observations. Let a finite set of linear estimators be given. Our aim is to mimic the estimator in this set that has the smallest risk on the true f. Under general conditions, we show that this can be achieved by simple minimization of an unbiased risk estimator, provided the singular values of the operator of the inverse problem decrease as a power law. The main result is a non-asymptotic oracle inequality that is shown to be asymptotically exact. This inequality can also be used to obtain sharp minimax adaptive results. In particular, we apply it to show that minimax adaptation on ellipsoids in the multivariate anisotropic case is realized by minimization of an unbiased risk estimator without any loss of efficiency with respect to optimal non-adaptive procedures. Mathematics Subject Classifications: 62G05, 62G20 Key Words: Statistical inverse problems, Oracle inequalities...
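The selection principle in this abstract — pick, from a finite family of linear estimators, the one minimizing an unbiased risk estimate — can be sketched in the plain Gaussian sequence model. The candidate family (projection weights) and the target signal below are my own illustrative choices, not the paper's inverse-problem setup.

```python
import numpy as np

# Illustrative sketch of unbiased risk minimization over a finite family of
# linear (diagonal) estimators in the sequence model y_i = theta_i + sigma*xi_i.
# For the estimator lam*y, E||lam*y - theta||^2 = sum (1-lam_i)^2 theta_i^2
# + sigma^2 lam_i^2, and replacing theta_i^2 by (y_i^2 - sigma^2) gives an
# unbiased estimate of that risk.
rng = np.random.default_rng(1)
N, sigma = 200, 1.0
idx = np.arange(1, N + 1)
theta = 5.0 / idx                     # a smooth, rapidly decaying target
y = theta + sigma * rng.standard_normal(N)

# Candidate linear estimators: projections lam_i = 1{i <= m}, m = 1..N.
candidates = [(idx <= m).astype(float) for m in range(1, N + 1)]

def unbiased_risk(lam, y, sigma):
    """Unbiased estimate of the risk of the linear estimator lam * y."""
    return np.sum((1 - lam) ** 2 * (y ** 2 - sigma ** 2) + sigma ** 2 * lam ** 2)

best = min(candidates, key=lambda lam: unbiased_risk(lam, y, sigma))
est = best * y
oracle = min(np.sum((1 - lam) ** 2 * theta ** 2 + sigma ** 2 * lam ** 2)
             for lam in candidates)
print(np.sum((est - theta) ** 2), oracle)  # selected loss vs. oracle risk
```

The oracle inequality in the paper makes this comparison precise: the selected estimator's risk is bounded by the best risk in the family plus a controlled remainder.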
High-dimensional analysis of semidefinite relaxations for sparse principal component analysis
, 2008
Abstract
Cited by 29 (2 self)
Principal component analysis (PCA) is a classical method for dimensionality reduction based on extracting the dominant eigenvectors of the sample covariance matrix. However, PCA is well known to behave poorly in the “large p, small n” setting, in which the problem dimension p is comparable to or larger than the sample size n. This paper studies PCA in this high-dimensional regime, but under the additional assumption that the maximal eigenvector is sparse, say with at most k nonzero components. We analyze two computationally tractable methods for recovering the support of this maximal eigenvector: (a) a simple diagonal cutoff method, which transitions from success to failure as a function of the order parameter θ_dia(n, p, k) = n/[k² log(p − k)]; and (b) a more sophisticated semidefinite programming (SDP) relaxation, which succeeds once the order parameter θ_sdp(n, p, k) = n/[k log(p − k)] is larger than a critical threshold. Our results thus highlight an interesting trade-off between computational and statistical efficiency in high-dimensional inference.
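The diagonal cutoff method from this abstract is simple enough to sketch directly: keep the k coordinates with the largest sample variances, since coordinates in the support of a sparse spike have inflated variance. The spiked-covariance toy setup below is my own construction for illustration.

```python
import numpy as np

# Illustrative sketch of the diagonal cutoff method for sparse PCA support
# recovery. Data are drawn from a spiked covariance I + lambda * v v^T with
# a k-sparse leading eigenvector v; all parameters are toy choices.
rng = np.random.default_rng(2)
n, p, k = 2000, 50, 5
v = np.zeros(p)
v[:k] = 1.0 / np.sqrt(k)                # sparse leading eigenvector, support {0..4}
cov = np.eye(p) + 4.0 * np.outer(v, v)  # spike eigenvalue 1 + 4 = 5
L = np.linalg.cholesky(cov)
X = rng.standard_normal((n, p)) @ L.T   # n i.i.d. samples from N(0, cov)

def diagonal_cutoff(X, k):
    """Return the indices of the k largest diagonal entries of the sample covariance."""
    variances = X.var(axis=0)           # diagonal of the sample covariance
    return set(np.argsort(variances)[-k:])

print(diagonal_cutoff(X, k))  # recovers the support {0, ..., 4} in this regime
```

Here n/[k² log(p − k)] ≈ 21, comfortably in the success regime for the diagonal method; shrinking n by an order of magnitude pushes it toward the failure side of the transition.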
TESTING CONVEX HYPOTHESES ON THE MEAN OF A GAUSSIAN VECTOR. APPLICATION TO TESTING QUALITATIVE HYPOTHESES ON A REGRESSION FUNCTION
, 2005
Abstract
Cited by 8 (0 self)
In this paper we propose a general methodology, based on multiple testing, for testing that the mean of a Gaussian vector in R^n belongs to a convex set. We show that the test achieves its nominal level, and characterize a class of vectors over which the tests achieve a prescribed power. In the functional regression model this general methodology is applied to test some qualitative hypotheses on the regression function. For example, we test that the regression function is positive, increasing, convex, or, more generally, satisfies a differential inequality. Uniform separation rates over classes of smooth functions are established and a comparison with other results in the literature is provided. A simulation study evaluates some of the procedures for testing monotonicity. 1. Introduction. 1.1. The statistical framework. We consider the following regression model: Y_i = F(x_i) + σ ε_i,
Information-theoretic limits on sparse support recovery: Dense versus sparse measurements
, 2008
Abstract
Cited by 5 (1 self)
We study the information-theoretic limits of exactly recovering the support of a sparse signal using noisy projections defined by various classes of measurement matrices. Our analysis is high-dimensional in nature, in which the number of observations n, the ambient signal dimension p, and the signal sparsity k are all allowed to tend to infinity in a general manner. This paper makes two novel contributions. First, we provide sharper necessary conditions for exact support recovery using general (non-Gaussian) dense measurement matrices. Combined with previously known sufficient conditions, this result yields a sharp characterization of when the optimal decoder can recover a signal with linear sparsity (k = Θ(p)) using a linear scaling of observations (n = Θ(p)) in the presence of noise. Our second contribution is to prove necessary conditions on the number of observations n required for asymptotically reliable recovery using a class of γ-sparsified measurement matrices, where the measurement sparsity γ(n, p, k) ∈ (0, 1] corresponds to the fraction of nonzero entries per row. Our analysis allows general scaling of the quadruplet (n, p, k, γ), and reveals three different regimes, corresponding to whether measurement sparsity has no effect, a minor effect, or a dramatic effect on the information-theoretic limits of the subset recovery problem.
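A γ-sparsified measurement ensemble of the kind this abstract studies can be generated as follows. The rescaling convention (dividing kept Gaussian entries by √γ so each entry has unit variance) is my own choice for illustration; the paper allows more general ensembles.

```python
import numpy as np

# Illustrative sketch of a gamma-sparsified measurement ensemble: each entry
# of the n x p matrix is nonzero with probability gamma, and the surviving
# Gaussian entries are rescaled so that E[X_ij^2] = 1. The rescaling is an
# illustrative convention, not taken from the paper.
def sparsified_matrix(n, p, gamma, rng):
    mask = rng.random((n, p)) < gamma                     # keep each entry w.p. gamma
    values = rng.standard_normal((n, p)) / np.sqrt(gamma) # unit variance overall
    return mask * values

rng = np.random.default_rng(4)
X = sparsified_matrix(200, 500, 0.1, rng)
print(np.mean(X != 0))   # fraction of nonzero entries, close to gamma = 0.1
```

Sweeping γ from 1 down toward 0 in such an ensemble is what exposes the three regimes in the abstract: measurement sparsity is free up to a point, then costs extra observations, then becomes information-theoretically fatal.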
CONFIDENCE BALLS IN GAUSSIAN REGRESSION
, 2004
Abstract
Starting from the observation of an R^n Gaussian vector of mean f and covariance matrix σ² I_n (I_n is the identity matrix), we propose a method for building a Euclidean confidence ball around f, with prescribed probability of coverage. For each n, we describe its non-asymptotic properties and show its optimality with respect to some criteria.
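As a baseline for the construction in this abstract, here is the textbook confidence ball for known σ (not the paper's method): since ‖Y − f‖²/σ² follows a chi-square distribution with n degrees of freedom, the ball centered at Y with squared radius σ² times the (1 − α) chi-square quantile covers f with probability exactly 1 − α.

```python
import numpy as np
from scipy.stats import chi2

# Textbook baseline (not the paper's construction): an exact 1-alpha Euclidean
# confidence ball around the observation Y ~ N(f, sigma^2 I_n), sigma known,
# using ||Y - f||^2 / sigma^2 ~ chi^2_n. Checked here by Monte Carlo.
def confidence_ball_radius(n, sigma, alpha=0.05):
    return sigma * np.sqrt(chi2.ppf(1 - alpha, df=n))

rng = np.random.default_rng(5)
n, sigma, alpha = 50, 1.0, 0.05
f = np.linspace(0.0, 1.0, n)        # arbitrary true mean for the experiment
r = confidence_ball_radius(n, sigma, alpha)
trials, covered = 2000, 0
for _ in range(trials):
    Y = f + sigma * rng.standard_normal(n)
    covered += np.linalg.norm(Y - f) <= r
print(covered / trials)   # empirical coverage, close to 0.95
```

The radius of this baseline ball scales like σ√n regardless of f; the paper's contribution is a construction whose behavior is analyzed non-asymptotically and shown optimal under specific criteria.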
Nonparametric Denoising of Signals with Unknown Local Structure, I: Oracle Inequalities
, 809
Abstract
We consider the problem of pointwise estimation of multidimensional signals s from noisy observations (y_τ) on the regular grid Z^d. Our focus is on adaptive estimation in the case when the signal can be well recovered using a (hypothetical) linear filter, which can depend on the unknown signal itself. The basic setting of the problem we address here can be summarized as follows: suppose that the signal s is “well-filtered”, i.e. there exists an adapted time-invariant linear filter q*_T, with coefficients which vanish outside the “cube” {0, ..., T}^d, which recovers s_0 from observations with small mean-squared error. We suppose that we do not know the filter q*, although we do know that such a filter exists. We give partial answers to the following questions: (i) is it possible to construct an adaptive estimator of the value s_0 which relies upon observations and recovers s_0 with basically the same estimation error as the unknown filter q*_T? (ii) how rich is the family of well-filtered (in the above sense) signals? We show that the answer to the first question is affirmative and provide a numerically efficient construction of a nonlinear adaptive filter. Further, we establish a simple calculus of “well-filtered” signals, and show that their family is quite large: it contains, for instance, sampled smooth signals, sampled modulated smooth signals and sampled harmonic functions.
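The notion of a "well-filtered" signal can be illustrated in one dimension: a sampled smooth signal is recovered from noisy observations with small mean-squared error by a short time-invariant filter, here a simple moving average. This is only the existence half of the story (the oracle filter); the filter width, signal, and noise level are my own toy choices, not the paper's adaptive construction.

```python
import numpy as np

# Illustrative 1-D sketch: a sampled smooth signal is "well-filtered" in the
# sense that a short time-invariant moving-average filter recovers it from
# noisy observations with small mean-squared error. Toy parameters throughout.
rng = np.random.default_rng(6)
T = 512
t = np.arange(T)
s = np.sin(2 * np.pi * t / 128)                 # sampled smooth signal
y = s + 0.3 * rng.standard_normal(T)            # noisy observations on the grid

def moving_average(y, width):
    kernel = np.ones(width) / width             # time-invariant linear filter
    return np.convolve(y, kernel, mode="same")

mse_raw = np.mean((y - s) ** 2)                 # about the noise variance 0.09
mse_filt = np.mean((moving_average(y, 9) - s) ** 2)
print(mse_raw, mse_filt)                        # filtering sharply reduces the error
```

The adaptive question the paper answers is harder: achieve essentially this filtered error without knowing the width 9 (or the filter at all) in advance, using only the observations.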