Results 1–10 of 92
Compressed sensing
 IEEE Trans. Inform. Theory
Abstract

Cited by 1730 (18 self)
Abstract—Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor)—so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ_2 error O(N^(1/2−1/p)) ...
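The recovery setting described in this abstract—a sparse vector measured through a small number of random linear functionals—can be illustrated with a minimal numerical sketch. Note that the paper's own reconstruction procedure is ℓ1 minimization; the sketch below substitutes orthogonal matching pursuit as a simpler greedy solver, and all dimensions and variable names are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 50, 100, 4              # measurements, signal length, sparsity

# a k-sparse signal with coefficients bounded away from zero
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.uniform(1.0, 2.0, size=k) * rng.choice([-1.0, 1.0], size=k)

# Gaussian measurement matrix: m << n general linear functionals of x
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

# orthogonal matching pursuit: greedily grow the support,
# re-fitting the selected coefficients by least squares at each step
idx, r = [], y.copy()
for _ in range(k):
    idx.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    r = y - A[:, idx] @ coef

x_hat = np.zeros(n)
x_hat[idx] = coef
print(float(np.linalg.norm(x_hat - x)))   # recovery error
```

With m = 50 Gaussian measurements of a 4-sparse length-100 signal, the greedy solver recovers the signal essentially exactly, consistent with the abstract's claim that far fewer than n samples suffice.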
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
Abstract

Cited by 832 (16 self)
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects—discrete digital signals, images, etc.; how many linear measurements do we need to recover objects from this class to within accuracy ɛ? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power law (or if the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude, |f|_(1) ≥ |f|_(2) ≥ ... ≥ |f|_(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|_(n) ≤ C · n^(−1/p). We take measurements ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian ...
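The power decay law in this abstract directly controls the best k-term approximation error: keeping the k largest entries of a weak-ℓp element leaves an ℓ2 tail of order k^(1/2−1/p). A small numerical check of that tail bound (the constants C, p, N below are arbitrary choices, not from the paper; the bound follows from comparing the tail sum with the integral of x^(−2/p)):

```python
import numpy as np

N, p, C = 10_000, 0.5, 1.0
n = np.arange(1, N + 1)
f_sorted = C * n ** (-1.0 / p)      # entries obeying |f|_(n) <= C n^(-1/p)

# l2 error left after keeping the k largest entries,
# versus the integral bound C k^(1/2 - 1/p) / sqrt(2/p - 1)
for k in (10, 100, 1000):
    tail = float(np.sqrt(np.sum(f_sorted[k:] ** 2)))
    bound = C * k ** (0.5 - 1.0 / p) / np.sqrt(2.0 / p - 1.0)
    print(k, tail, bound)
```

For p = 1/2 the tail decays like k^(−3/2), which is the k^(1/2−1/p) rate that makes recovery from few measurements possible.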
Regularization Theory and Neural Networks Architectures
 Neural Computation
, 1995
Abstract

Cited by 309 (31 self)
We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, som...
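The Radial Basis Functions scheme this abstract refers to places one Gaussian unit on each data point and solves a regularized least-squares problem for the output weights. A minimal sketch of that scheme, with an illustrative 1-D target and arbitrarily chosen kernel width and regularization strength (the paper itself derives these networks from smoothness functionals rather than fixing them by hand):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x)             # samples of the target function

# Gaussian RBF kernel matrix: one hidden unit centered on each sample
sigma, lam = 0.1, 1e-6
G = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma**2))

# regularized interpolation: (G + lam I) w = y is the linear system
# that a regularization network with this kernel solves
w = np.linalg.solve(G + lam * np.eye(len(x)), y)

y_hat = G @ w                         # network output at the training points
err = float(np.max(np.abs(y_hat - y)))
print(err)
```

The regularization parameter lam trades data fit against smoothness; larger values give smoother, less exact fits, which is the role the smoothness functional plays in the paper's derivation.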
On the mathematical foundations of learning
 Bulletin of the American Mathematical Society
, 2002
Abstract

Cited by 223 (12 self)
The problem of learning is arguably at the very core of the problem of intelligence, both biological and artificial. T. Poggio and C. R. Shelton
A Factorization Approach to Grouping
 in European Conference on Computer Vision
, 1998
Abstract

Cited by 150 (0 self)
The foreground group in a scene may be `discovered' and computed as a factorized approximation to the pairwise affinity of the elements in the scene. A pointwise approximation of the pairwise affinity information may in fact be interpreted as a `saliency' index, and the foreground of the scene may be obtained by thresholding it. An algorithm called `affinity factorization' is thus obtained which may be used for grouping.
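The abstract's pipeline—pairwise affinities, a factorized (rank-1) approximation yielding a pointwise saliency index, then a threshold—can be sketched numerically. The data, affinity kernel, and threshold below are illustrative assumptions, not the paper's; the rank-1 factor is taken from the leading eigenvector of the affinity matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
# a tight foreground cluster near the origin plus scattered background
fg = rng.normal(loc=0.0, scale=0.05, size=(20, 2))
bg = rng.uniform(1.5, 3.0, size=(10, 2))
pts = np.vstack([fg, bg])

# pairwise affinity: Gaussian of squared distance
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
A = np.exp(-d2 / (2 * 0.5**2))

# rank-1 factorization A ~ s s^T via the leading eigenvector;
# s acts as a pointwise 'saliency' index
vals, vecs = np.linalg.eigh(A)
s = np.abs(vecs[:, -1]) * np.sqrt(vals[-1])

# thresholding the saliency recovers the foreground group
foreground = s > 0.5 * s.max()
print(foreground)
```

The leading eigenvector concentrates on the mutually high-affinity cluster, so thresholding it separates foreground from background without any explicit clustering step.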
Data compression and harmonic analysis
 IEEE Trans. Inform. Theory
, 1998
Abstract

Cited by 140 (24 self)
In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon’s R(D) theory...
Deformable Kernels for Early Vision
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1991
Abstract

Cited by 131 (9 self)
Early vision algorithms often have a first stage of linear filtering that `extracts' from the image information at multiple scales of resolution and multiple orientations. A common difficulty in the design and implementation of such schemes is that one feels compelled to discretize coarsely the space of scales and orientations in order to reduce computation and storage costs. This discretization produces anisotropies due to a loss of translation, rotation, and scaling invariance that makes early vision algorithms less precise and more difficult to design. This need not be so: one can compute and store efficiently the response of families of linear filters defined on a continuum of orientations and scales. A technique is presented that allows one (1) to compute the best approximation of a given family using linear combinations of a small number of `basis' functions; and (2) to describe all finite-dimensional families, i.e., the families of filters for which a finite-dimensional representation is p...
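The core idea—approximating a continuum of oriented filters by linear combinations of a few basis kernels, with the best basis obtained from an SVD of the sampled family—can be sketched directly. The filter family below (oriented first derivatives of a Gaussian) and all sizes are illustrative choices; this particular family happens to be exactly representable with two basis kernels:

```python
import numpy as np

# oriented first-derivative-of-Gaussian kernels on an n x n grid
n, sigma = 15, 2.0
ax = np.arange(n) - n // 2
X, Y = np.meshgrid(ax, ax)
G = np.exp(-(X**2 + Y**2) / (2 * sigma**2))

thetas = np.linspace(0.0, np.pi, 64, endpoint=False)
family = np.stack([(np.cos(t) * X + np.sin(t) * Y) * G for t in thetas])

# SVD of the family (one flattened kernel per row) gives the best
# small set of 'basis' kernels in the least-squares sense
M = family.reshape(len(thetas), -1)
U, S, Vt = np.linalg.svd(M, full_matrices=False)

# two basis kernels reproduce every orientation in the continuum
rank2 = (U[:, :2] * S[:2]) @ Vt[:2]
err = float(np.abs(rank2 - M).max())
print(err)
```

Since each oriented kernel is cos(θ)·(X·G) + sin(θ)·(Y·G), the family spans a two-dimensional space and the rank-2 reconstruction is exact to machine precision; for families that are not exactly steerable, the decaying singular values tell how many basis kernels a given accuracy requires.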
Steerable-Scalable Kernels for Edge Detection and Junction Analysis
 Image and Vision Computing
, 1992
Abstract

Cited by 80 (1 self)
Families of kernels that are useful in a variety of early vision algorithms may be obtained by rotating and scaling a `template' kernel in a continuum. This multi-scale, multi-orientation family may be approximated by linear interpolation of a discrete, finite set of appropriate `basis' kernels. A scheme for generating such a basis, together with the appropriate interpolation weights, is described. Unlike previous schemes by Perona and by Simoncelli et al., it is guaranteed to generate the most parsimonious one. Additionally, it is shown how to exploit two symmetries in edge-detection kernels to reduce storage and computational costs and to generate end-stop- and junction-tuned filters for free.
A chronology of interpolation: From ancient astronomy to modern signal and image processing
 Proceedings of the IEEE
, 2002
Abstract

Cited by 61 (0 self)
This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into historical perspective. A summary of the insights and recommendations that follow from relatively recent theoretical as well as experimental studies concludes the presentation. Keywords—Approximation, convolution-based interpolation, history, image processing, polynomial interpolation, signal processing, splines. “It is an extremely useful thing to have knowledge of the true origins of memorable discoveries, especially those that have been found not by accident but by dint of meditation. It is not so much that thereby history may attribute to each man his own discoveries and others should be encouraged to earn like commendation, as that the art of making discoveries should be extended by considering noteworthy examples of it.”
The Partition of Unity Method
 International Journal of Numerical Methods in Engineering
, 1996
Abstract

Cited by 52 (2 self)
A new finite element method is presented that features the ability to include in the finite element space knowledge about the partial differential equation being solved. This new method can therefore be more efficient than the usual finite element methods. An additional feature of the partition-of-unity method is that finite element spaces of any desired regularity can be constructed very easily. This paper includes a convergence proof of this method and illustrates its efficiency by an application to the Helmholtz equation for high wave numbers. The basic estimates for a posteriori error estimation for this new method are also proved. Key words: finite element method, meshless finite element method, finite element methods for highly oscillatory solutions. TICAM, The University of Texas at Austin, Austin, TX 78712; research was partially supported by the US Office of Naval Research under grant N00014-90-J-1030. Seminar for Applied Mathematics, ETH Zürich, CH-8092 Zürich, Switzerland.
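The space construction behind the partition-of-unity method—a set of functions summing to one everywhere, each multiplied by a local approximation space—can be illustrated in 1-D. The sketch below uses piecewise-linear hat functions as the partition of unity and local linear polynomials as the enrichment, and fits a smooth target by least squares; the actual method solves a PDE variationally in this space, and all sizes here are illustrative:

```python
import numpy as np

# nodes on [0, 1]; the hat functions below form a partition of unity
nodes = np.linspace(0.0, 1.0, 11)
x = np.linspace(0.0, 1.0, 200)
h = nodes[1] - nodes[0]

def hat(i, x):
    """Piecewise-linear hat centered at nodes[i]; the hats sum to 1."""
    return np.clip(1.0 - np.abs(x - nodes[i]) / h, 0.0, None)

# global basis: each hat times a local space {1, (x - x_i)},
# so local knowledge enters through the second factor
cols = []
for i in range(len(nodes)):
    phi = hat(i, x)
    cols.append(phi)
    cols.append(phi * (x - nodes[i]))
B = np.stack(cols, axis=1)

# least-squares fit of a smooth target in the partition-of-unity space
target = np.sin(2 * np.pi * x)
c, *_ = np.linalg.lstsq(B, target, rcond=None)
err = float(np.max(np.abs(B @ c - target)))
print(err)
```

Because the hats sum to one, the global space inherits the approximation power of the local spaces; enriching the local space (e.g., with plane waves for the Helmholtz equation, as in the paper) changes only the second factor in each column.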