Results 1–10 of 13
Compressed sensing
 IEEE Trans. Inform. Theory
, 2006
Cited by 1730 (18 self)
Abstract—Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so that the coefficients belong to an ℓ_p ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ_2 error O(N^{1/2−1/p}).
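The N-term ℓ_2 error rate in this abstract can be checked numerically. The sketch below is not from the paper: the coefficient model c_k = k^{−1/p} is an assumption chosen to sit at the boundary of the ℓ_p ball, so the tail error of keeping the N largest coefficients should scale like N^{1/2−1/p}.

```python
import math

def best_n_term_error(coeffs, N):
    """l2 error of keeping only the N largest-magnitude coefficients."""
    tail = sorted((abs(c) for c in coeffs), reverse=True)[N:]
    return math.sqrt(sum(c * c for c in tail))

# Hypothetical coefficient sequence at the edge of the l_p ball, p = 1/2.
p = 0.5
coeffs = [k ** (-1.0 / p) for k in range(1, 100001)]

e1 = best_n_term_error(coeffs, 100)
e2 = best_n_term_error(coeffs, 200)
# Doubling N should shrink the error by roughly a factor 2^{1/2 - 1/p}.
print(e2 / e1, 2 ** (0.5 - 1.0 / p))
```

For p = 1/2 the predicted factor is 2^{−3/2} ≈ 0.354, and the computed ratio lands within a percent of it.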
Adaptive wavelet methods for elliptic operator equations: convergence rates
 Math. Comput
, 2001
Cited by 109 (30 self)
Abstract. This paper is concerned with the construction and analysis of wavelet-based adaptive algorithms for the numerical solution of elliptic equations. These algorithms approximate the solution u of the equation by a linear combination of N wavelets. Therefore, a benchmark for their performance is provided by the rate of best approximation to u by an arbitrary linear combination of N wavelets (so-called N-term approximation), which would be obtained by keeping the N largest wavelet coefficients of the real solution (which of course is unknown). The main result of the paper is the construction of an adaptive scheme which produces an approximation to u with error O(N^{−s}) in the energy norm, whenever such a rate is possible by N-term approximation. The range of s > 0 for which this holds is only limited by the approximation properties of the wavelets together with their ability to compress the elliptic operator. Moreover, it is shown that the number of arithmetic operations needed to compute the approximate solution stays proportional to N. The adaptive algorithm applies to a wide class of elliptic problems and wavelet bases. The analysis in this paper puts forward new techniques for treating elliptic problems as well as the linear systems of equations that arise from the wavelet discretization.
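The benchmark described here, best N-term approximation by keeping the N largest wavelet coefficients, can be sketched in a few lines. The Haar basis and the piecewise-linear test function below are illustrative assumptions, not the paper's setting:

```python
import math

def haar(x):
    """Orthonormal Haar wavelet transform of a length-2^k sequence."""
    x = list(x)
    detail = []
    while len(x) > 1:
        s = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
        d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
        detail = d + detail   # finest-level details go last
        x = s
    return x + detail

# A piecewise-linear function with a kink, standing in for the solution u.
u = [abs(t / 1024 - 0.5) for t in range(1024)]
c = haar(u)

# By Parseval, the l2 error of keeping the N largest coefficients equals
# the l2 norm of the discarded ones; it shrinks as N grows.
errs = []
for N in (16, 64, 256):
    tail = sorted(abs(v) for v in c)[:-N]
    errs.append(math.sqrt(sum(v * v for v in tail)))
print(errs)
```

An adaptive solver, of course, must reach this benchmark without ever seeing the exact coefficients of u; that is the point of the paper's main result.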
Sickel: Optimal approximation of elliptic problems by linear and nonlinear mappings III
 Triebel, Function Spaces, Entropy Numbers, Differential Operators
, 1996
Cited by 17 (5 self)
We study the optimal approximation of the solution of an operator equation A(u) = f by four types of mappings: a) linear mappings of rank n; b) n-term approximation with respect to a Riesz basis; c) approximation based on linear information about the right-hand side f; d) continuous mappings. We consider worst-case errors, where f is an element of the unit ball of a Sobolev or Besov space B^r_q(L_p(Ω)) and Ω ⊂ R^d is a bounded Lipschitz domain; the error is always measured in the H^s-norm. The respective widths are the linear widths (or approximation numbers), the nonlinear widths, the Gelfand widths, and the manifold widths. As a technical tool, we also study the Bernstein numbers. Our main results are the following. If p ≥ 2, then the order of convergence is the same for all four classes of approximations. In particular, the best linear approximations are of the same order as the best nonlinear ones. The best linear approximation can be quite difficult to realize as a numerical algorithm, since the optimal Galerkin space usually depends on the operator and on the shape of the domain Ω. For p < 2 there is a difference: nonlinear approximations are better than linear ones. However, in this case it turns out that linear information about the right-hand side f is again optimal. Our main theoretical tool is the best n-term approximation with respect to an optimal Riesz basis and related nonlinear widths. These general results are used to study the Poisson equation in a polygonal domain. It turns out that best n-term wavelet approximation is (almost) optimal.
On the fundamental limits of adaptive sensing
, 2011
Cited by 7 (0 self)
Suppose we can sequentially acquire arbitrary linear measurements of an n-dimensional vector x resulting in the linear model y = Ax + z, where z represents measurement noise. If the signal is known to be sparse, one would expect the following folk theorem to be true: choosing an adaptive strategy which cleverly selects the next row of A based on what has been previously observed should do far better than a nonadaptive strategy which sets the rows of A ahead of time, thus not trying to learn anything about the signal in between observations. This paper shows that the folk theorem is false. We prove that the advantages offered by clever adaptive strategies and sophisticated estimation procedures—no matter how intractable—over classical compressed acquisition/recovery schemes are, in general, minimal.
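The measurement model y = Ax + z in this abstract can be set up directly. A minimal nonadaptive sketch follows; the dimensions, Gaussian rows, and noise level are illustrative assumptions, and recovery itself is left to any standard compressed-sensing solver:

```python
import math
import random

random.seed(0)
n, m, k = 256, 64, 5          # signal dimension, measurements, sparsity (assumed)

# A k-sparse signal x with a random support.
support = random.sample(range(n), k)
x = [0.0] * n
for i in support:
    x[i] = random.gauss(0, 1)

# Nonadaptive strategy: every row of A is fixed before any observation.
A = [[random.gauss(0, 1) / math.sqrt(m) for _ in range(n)] for _ in range(m)]

# Noisy linear measurements y = Ax + z.
sigma = 0.01
y = [sum(a * v for a, v in zip(row, x)) + random.gauss(0, sigma) for row in A]
print(len(y))
```

An adaptive scheme would instead generate each row after seeing the previous entries of y; the paper's result says this extra freedom buys surprisingly little.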
Besov Regularity for Interface Problems
, 1998
Cited by 2 (2 self)
This paper is concerned with the Besov regularity of the solutions to interface problems in a segment S of the unit disk in R^2. We investigate the smoothness of the solutions as measured in the specific scale B^s_τ(L_τ(S)), 1/τ = s/2 + 1/p, of Besov spaces which determines the order of approximation that can be achieved by adaptive and nonlinear numerical schemes. The proofs are based on representations of the solution spaces which were derived by Kellogg [15] and on characterizations of Besov spaces by wavelet expansions.
Key Words: Interface problems, adaptive methods, nonlinear approximation, Besov spaces, wavelets.
AMS Subject Classification: Primary 35B65, secondary 41A46, 46E35, 65N30.
1 Introduction
In recent years, the use of adaptive schemes has become a widespread strategy in numerical analysis. In particular, adaptive algorithms have been successfully implemented for the numerical treatment of boundary value problems of the form Au = f on Ω ⊂ R^d, (1.1) Bu...
Uniform Reconstruction of Gaussian Processes
, 1995
Cited by 1 (1 self)
We consider a Gaussian process X with smoothness comparable to the Brownian motion. We analyze reconstructions of X which are based on observations at finitely many points. For each realization of X the error is defined in a weighted supremum norm; the overall error of a reconstruction is defined as the pth moment of this norm. We determine the rate of the minimal errors and provide different reconstruction methods which perform asymptotically optimally. In particular, we show that linear interpolation at the quantiles of a certain density is asymptotically optimal.
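A toy version of the last statement, linear interpolation at the quantiles of a density, can be simulated for Brownian motion itself. The sketch below uses a constant density as a stand-in for the paper's optimal one, and the grid sizes are illustrative assumptions:

```python
import math
import random

random.seed(1)

def brownian(m):
    """A Brownian path sampled on a grid of m+1 points in [0, 1]."""
    w = [0.0]
    for _ in range(m):
        w.append(w[-1] + random.gauss(0, math.sqrt(1.0 / m)))
    return w

def quantile_knots(density, n, m):
    """Grid indices of n knots placed at the quantiles of `density`."""
    mass = [density(i / m) for i in range(m + 1)]
    cdf, acc = [], 0.0
    for v in mass:
        acc += v
        cdf.append(acc)
    cdf = [c / acc for c in cdf]
    knots, j = [0], 1
    for i in range(m + 1):
        if j < n - 1 and cdf[i] >= j / (n - 1):
            knots.append(i)
            j += 1
    knots.append(m)
    return sorted(set(knots))

m, n = 4096, 33
w = brownian(m)
knots = quantile_knots(lambda t: 1.0, n, m)   # constant density: equidistant knots

# Piecewise-linear reconstruction from the n observed values; sup-norm error.
err = 0.0
for a, b in zip(knots, knots[1:]):
    for i in range(a, b + 1):
        lam = (i - a) / (b - a)
        rec = (1 - lam) * w[a] + lam * w[b]
        err = max(err, abs(w[i] - rec))
print(err)
```

Swapping in a nonconstant density moves the knots toward regions of higher mass, which is exactly the design freedom the paper optimizes over.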