Results 1–10 of 40
Compressed sensing
 IEEE Trans. Inform. Theory
Abstract

Cited by 1793 (18 self)
Abstract—Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ^p ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ^2 error O(N^{1/2 − 1/p})
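The nonlinear reconstruction procedure the abstract alludes to is, in its simplest form, ℓ^1 minimization (basis pursuit). A minimal sketch, assuming an already-sparse signal and random Gaussian measurement functionals rather than the paper's exact construction; all sizes and the spike location are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 16, 8                  # signal length, number of measurements (n << m)
x0 = np.zeros(m)
x0[3] = 2.5                   # a 1-sparse signal; index and value are illustrative

A = rng.standard_normal((n, m)) / np.sqrt(n)  # random linear measurement functionals
b = A @ x0                                    # the n measurements

# Basis pursuit: min ||x||_1  s.t.  Ax = b, posed as an LP via x = u - v, u, v >= 0.
c = np.ones(2 * m)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_rec = res.x[:m] - res.x[m:]
print(np.max(np.abs(x_rec - x0)))  # near zero: far fewer than m samples suffice
```

With n much smaller than m, the sparse vector is typically recovered exactly, which is the phenomenon the abstract quantifies.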
De-Noising by Soft-Thresholding
, 1992
Abstract

Cited by 827 (13 self)
Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0, 1] from noisy data d_i = f(t_i) + σ z_i, i = 0, …, n−1, t_i = i/n, z_i iid N(0, 1). The reconstruction f̂_n is defined in the wavelet domain by translating all the empirical wavelet coefficients of d towards 0 by an amount σ·√(2 log n)/√n. We prove two results about that estimator. [Smooth]: With high probability f̂_n is at least as smooth as f, in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.
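The shrinkage rule described here is soft thresholding, η_t(y) = sign(y)·max(|y| − t, 0), applied coefficient-wise in the wavelet domain. A minimal sketch of the rule with the universal threshold √(2 log n), using a single-level Haar transform as a stand-in for the paper's full wavelet transform (the paper's exact normalization depends on its sampling model):

```python
import numpy as np

def soft_threshold(y, t):
    """Translate y towards 0 by t: sign(y) * max(|y| - t, 0)."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def haar_denoise(d, sigma):
    """One-level Haar transform (n must be even), soft-threshold the detail
    coefficients with the universal threshold, then invert the transform.
    A stand-in for the multi-level wavelet transform used in the paper."""
    n = len(d)
    approx = (d[0::2] + d[1::2]) / np.sqrt(2)
    detail = (d[0::2] - d[1::2]) / np.sqrt(2)
    t = sigma * np.sqrt(2 * np.log(n))        # universal threshold
    detail = soft_threshold(detail, t)
    out = np.empty(n)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

print(soft_threshold(np.array([3.0, -0.5, 1.5]), 1.0))  # entries shrink to 2.0, 0, 0.5
```

With sigma = 0 the threshold vanishes and the transform round-trips the data exactly, which makes the reconstruction step easy to check.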
Wavelet Shrinkage: Asymptopia?
 Journal of the Royal Statistical Society, Ser. B
, 1995
Abstract

Cited by 241 (35 self)
Considerable effort has been directed recently to develop asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly or exactly minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons: sometimes similarity to known methods, sometimes computational intractability, and sometimes lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount √(2 log n)/√n. The method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions (e.g. pointwise error, global error measured in L^p norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and Bounded Variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity.
Unconditional bases are optimal bases for data compression and for statistical estimation
 Applied and Computational Harmonic Analysis
, 1993
Abstract

Cited by 144 (23 self)
An orthogonal basis of L^2 which is also an unconditional basis of a functional space F is a kind of optimal basis for compressing, estimating, and recovering functions in F. Simple thresholding operations, applied in the unconditional basis, work essentially better for compressing, estimating, and recovering than they do in any other orthogonal basis. In fact, simple thresholding in an unconditional basis works essentially better for recovery and estimation than other methods, period. (Performance is measured in an asymptotic minimax sense.) As an application, we formalize and prove Mallat's Heuristic, which says that wavelet bases are optimal for representing functions containing singularities, when there may be an arbitrary number of singularities, arbitrarily distributed.
The mathematics of learning: Dealing with data
 Notices of the American Mathematical Society
, 2003
Abstract

Cited by 109 (15 self)
Learning is key to developing systems tailored to a broad range of data analysis and information extraction tasks. We outline the mathematical foundations of learning theory and describe one of its key algorithms.
Explicit Cost Bounds of Algorithms for Multivariate Tensor Product Problems
 J. Complexity
, 1994
Abstract

Cited by 44 (10 self)
We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an ε-approximation to the solution. The cost bounds are of the form (c(d) + 2) β₁ (β₂ + β₃ ln(1/ε)/(d − 1))^{β₄(d−1)} (1/ε)^{β₅}. Here c(d) is the cost of one function evaluation (or one linear functional evaluation), and the β_i do not depend on d; they are determined by the properties of the problem for d = 1. For certain tensor product problems, these cost bounds do not exceed c(d) K ε^{−p} for some numbers K and p, both independent of d. We apply these general estimates to certain integration and approximation problems in the worst and average case settings. We also obtain an upper bound, which is independent of d, for the number n(ε, d) of points for which discrepancy (with unequal weights) is at most ε: n(ε, d) ≤ 7.26 ...
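Read as a formula, the bound multiplies a per-evaluation cost against a factor that grows only polylogarithmically in 1/ε per extra dimension. A sketch that simply evaluates the bound for illustrative constants (the β_i below are placeholders, not the paper's derived values):

```python
import math

def cost_bound(c_d, eps, d, b1, b2, b3, b4, b5):
    """Evaluate (c(d) + 2) * b1 * (b2 + b3 * ln(1/eps)/(d-1))**(b4*(d-1)) * (1/eps)**b5.
    The b_i stand in for the beta_i, which the paper derives from the d = 1 problem."""
    assert d >= 2 and 0 < eps < 1
    return ((c_d + 2) * b1
            * (b2 + b3 * math.log(1 / eps) / (d - 1)) ** (b4 * (d - 1))
            * (1 / eps) ** b5)

# Illustrative: for fixed accuracy the d-dependence enters only through the
# polylogarithmic middle factor, not through (1/eps)**b5.
print(cost_bound(c_d=1.0, eps=1e-2, d=5, b1=1, b2=1, b3=1, b4=1, b5=2))
```

Setting b3 = 0 collapses the bound to the d-independent form c(d)·K·ε^{−p} mentioned in the abstract.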
Random Approximation in Numerical Analysis
 Proceedings of the Conference "Functional Analysis", Essen
, 1994
Abstract

Cited by 29 (22 self)
this paper is twofold. In the first part (sections 2–6) I want to give a survey on recent developments of Monte Carlo complexity. This will include techniques to derive sharp lower bounds as well as the construction of concrete numerical methods which attain these optimal bounds. The field covered here lies at the frontiers of several disciplines, among them theoretical computer science, numerical analysis, probability theory, approximation theory and, to a large extent, functional analysis. I want to stress the latter aspect and show how new techniques from Banach space and operator theory can be applied to Monte Carlo complexity. In the second part I want to present new results: the solution to a problem concerning the Monte Carlo complexity of Fredholm integral equations. This will demonstrate in detail the general approach outlined in part one. We develop a new, fast algorithm: it is a combination of Monte Carlo methods with the Galerkin technique, an approach which seems to be new to this field. The basis functions used for the Galerkin discretization are orthogonal splines of minimal smoothness. They lead to an implementable procedure of minimal computational cost. The paper is organized as follows. In section 2, the main notions of information-based complexity theory are explained. We cover both the deterministic and the stochastic setting in detail, also for the sake of later comparisons. Some relations to s-number theory are presented in section 3. The role of the average case in proofs of lower bounds for Monte Carlo methods is explained in section 4. In the following three sections, we analyse the complexity of basic numerical problems: section 5 deals with numerical integration and contains classical results on the complexity of Monte Carlo quadrature, toge...
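The classical Monte Carlo quadrature result referred to, with root-mean-square error decaying like n^{−1/2}, can be sketched generically (this is the textbook estimator, not the paper's Galerkin-based algorithm):

```python
import numpy as np

def mc_quadrature(f, n, rng):
    """Estimate the integral of f over [0, 1] by averaging f at n uniform
    random points; the RMS error decays like n**(-1/2)."""
    x = rng.random(n)
    return f(x).mean()

rng = np.random.default_rng(42)
est = mc_quadrature(lambda x: x ** 2, 100_000, rng)
print(est)  # close to the exact value 1/3
```

The same n^{−1/2} rate holds in any dimension, which is the starting point for the complexity comparisons the survey describes.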
Comparison of Radial Basis Function Interpolants
 In Multivariate Approximation. From CAGD to Wavelets
, 1995
Abstract

Cited by 26 (7 self)
This paper compares radial basis function interpolants on different spaces. The spaces are generated by other radial basis functions, and comparison is done via an explicit representation of the norm of the error functional. The results pose some new questions for further research. §1. Introduction. We consider interpolation of real-valued functions f defined on a set Ω ⊆ R^d, d ≥ 1. These functions are evaluated on a set X := {x_1, …, x_{N_X}} of N_X ≥ 1 pairwise distinct points x_1, …, x_{N_X} in Ω. If N ≥ 2, d ≥ 2 and Ω ⊆ R^d are given with Ω containing at least an interior point, it is well known that there is no N-dimensional space of continuous functions on Ω that contains a unique interpolant for every f and every set X = {x_1, …, x_{N_X}} ⊂ Ω consisting of N = N_X data points. Thus the family of interpolants must necessarily depend on X. This can easily be achieved by using translates Φ(x − x_j) of a single continu...
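The translate-based construction at the end can be made concrete: the interpolant is s(x) = Σ_j c_j Φ(x − x_j), with the coefficients c solved from the collocation system. A minimal one-dimensional sketch with a Gaussian Φ (an assumption; the paper treats general radial functions, and the width 5 below is illustrative):

```python
import numpy as np

def rbf_interpolate(x_nodes, f_vals, phi):
    """Build s(x) = sum_j c_j * phi(x - x_j) with A c = f, A_jk = phi(x_j - x_k)."""
    A = phi(x_nodes[:, None] - x_nodes[None, :])
    c = np.linalg.solve(A, f_vals)
    return lambda x: phi(x[:, None] - x_nodes[None, :]) @ c

phi = lambda r: np.exp(-(5.0 * r) ** 2)     # Gaussian radial function, width 5
x_nodes = np.linspace(0.0, 1.0, 6)
f = np.sin(2 * np.pi * x_nodes)
s = rbf_interpolate(x_nodes, f, phi)
print(np.max(np.abs(s(x_nodes) - f)))       # ~0: s interpolates f at the nodes
```

Because the interpolant is built from translates of a single Φ, the construction depends on X exactly as the abstract argues it must.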
Optimal Asymptotic Identification Under Bounded Disturbances
 IEEE Trans. Automat. Contr
Abstract

Cited by 14 (3 self)
This paper investigates the intrinsic limitation of worst-case identification of LTI systems using data corrupted by bounded disturbances, when the unknown plant is known to belong to a given model set. This is done by analyzing the optimal worst-case asymptotic error achievable by performing experiments using any bounded inputs and estimating the plant using any identification algorithm. First, it is shown that under some topological conditions on the model set, there is an identification algorithm which is asymptotically optimal for any input. A characterization of the optimal asymptotic error as a function of the inputs is also obtained. These results hold for any error metric and disturbance norm. Second, these general results are applied to three specific identification problems: identification of stable systems in the ℓ^1 norm, identification of stable rational systems in the H∞ norm, and identification of unstable rational systems in the gap metric. For each of these problems, the...
Universal Near Minimaxity of Wavelet Shrinkage
, 1995
Abstract

Cited by 14 (3 self)
We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount √(2 log n)·σ/√n. The method is nearly minimax for a wide variety of loss functions (e.g. pointwise error, global error measured in L^p norms, pointwise and global error in estimation of derivatives) and for a wide range of smoothness classes, including standard Hölder classes, Sobolev classes, and Bounded Variation. This is a broader near-optimality than anything previously proposed in the minimax literature. The theory underlying the method exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity. This paper contains a detailed proof of the result announced in Donoho, Johnstone, Kerkyacharian & Picard (1995).