n-widths in approximation theory (1985)

by A Pinkus
Results 1 - 10 of 189

Compressed sensing

by Yaakov Tsaig, David L. Donoho, 2004
"... We study the notion of Compressed Sensing (CS) as put forward in [14] and related work [20, 3, 4]. The basic idea behind CS is that a signal or image, unknown but supposed to be compressible by a known transform, (eg. wavelet or Fourier), can be subjected to fewer measurements than the nominal numbe ..."
Abstract - Cited by 3625 (22 self) - Add to MetaCart
We study the notion of Compressed Sensing (CS) as put forward in [14] and related work [20, 3, 4]. The basic idea behind CS is that a signal or image, unknown but supposed to be compressible by a known transform (e.g., wavelet or Fourier), can be subjected to fewer measurements than the nominal number of pixels, and yet be accurately reconstructed. The samples are nonadaptive and measure ‘random’ linear combinations of the transform coefficients. Approximate reconstruction is obtained by solving for the transform coefficients consistent with the measured data and having the smallest possible ℓ1 norm. We perform a series of numerical experiments which validate in general terms the basic idea proposed in [14, 3, 5], in the favorable case where the transform coefficients are sparse in the strong sense that the vast majority are zero. We then consider a range of less-favorable cases, in which the object has all coefficients nonzero, but the coefficients obey an ℓp bound, for some p ∈ (0, 1]. These experiments show that the basic inequalities behind the CS method seem to involve reasonable constants. We next consider synthetic examples modelling problems in spectroscopy and image processing ...
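
A minimal sketch of the reconstruction idea described in this abstract, assuming a strictly sparse signal, a Gaussian measurement matrix, and SciPy's linear-programming solver as the ℓ1 minimizer; the signal length, measurement count, and sparsity level below are illustration choices, not values from the paper.

    # Compressed-sensing sketch: measure a sparse vector with a random Gaussian
    # matrix and reconstruct it by l1 minimization (basis pursuit) posed as an LP.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    N, K, s = 256, 80, 8                           # signal length, measurements, nonzeros (assumed)

    x_true = np.zeros(N)
    x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)

    A = rng.standard_normal((K, N)) / np.sqrt(K)   # nonadaptive 'random' measurements
    y = A @ x_true

    # min ||x||_1  s.t.  Ax = y,  via the split x = u - v with u, v >= 0
    c = np.ones(2 * N)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N), method="highs")
    x_hat = res.x[:N] - res.x[N:]

    print("relative l2 error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))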

Citation Context

...owed by Garnaev and Gluskin [19], implicitly considered the random signs ensemble in the dual problem of Kolmogorov n-widths. Owing to a duality relationship between Gel’fand and Kolmogorov n-widths ([26]), and a relationship between Gel’fand n-widths and compressed sensing [14, 27] these matrices are suitable for use in the case p = 1. • Donoho [12, 13, 14] considered the uniform Spherical ensemble. ...

Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

by Emmanuel J. Candès, Terence Tao, 2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Abstract - Cited by 1513 (20 self) - Add to MetaCart
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ɛ? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power law (or if the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|_(1) ≥ |f|_(2) ≥ ... ≥ |f|_(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|_(n) ≤ C · n^(−1/p). We take measurements 〈f, Xk〉, k = 1, ..., K, where the Xk are N-dimensional Gaussian ...
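
A small sketch of the measurement setup described above, with assumed values for N, K, p, and C: it builds a vector whose reordered entries obey the stated power decay law, takes K Gaussian measurements 〈f, Xk〉, and reports the best s-term approximation error, a natural benchmark for any recovery from such measurements.

    # Weak-lp / power-law compressible signal and Gaussian measurements (sketch).
    import numpy as np

    rng = np.random.default_rng(1)
    N, K, p, C = 512, 100, 0.7, 1.0            # assumed illustration parameters

    decay = C * np.arange(1, N + 1) ** (-1.0 / p)            # power-law magnitudes
    f = rng.permutation(decay * rng.choice([-1.0, 1.0], N))  # random signs and order

    X = rng.standard_normal((K, N))            # N-dimensional Gaussian test vectors X_k
    y = X @ f                                  # measurements <f, X_k>, k = 1, ..., K

    # verify the weak-lp condition and compute the best s-term approximation error
    mags = np.sort(np.abs(f))[::-1]
    assert np.all(mags <= C * np.arange(1, N + 1) ** (-1.0 / p) + 1e-12)
    s = 20
    print("best %d-term l2 error: %.4f" % (s, np.linalg.norm(mags[s:])))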

Compressive sampling

by Emmanuel J. Candès, 2006
"... Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired res ..."
Abstract - Cited by 1441 (15 self) - Add to MetaCart
Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, i.e. the number of pixels in the image. This paper surveys an emerging theory which goes by the name of “compressive sampling” or “compressed sensing,” and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals of scientific interest accurately and sometimes even exactly from a number of samples which is far smaller than the desired resolution of the image/signal, e.g. the number of pixels in the image. It is believed that compressive sampling has far reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than what was considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously. In this short survey, we provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.

Regularization Theory and Neural Networks Architectures

by Federico Girosi, Michael Jones, Tomaso Poggio - Neural Computation, 1995
"... We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Ba ..."
Abstract - Cited by 395 (32 self) - Add to MetaCart
We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, som...
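
A minimal sketch of a one-hidden-layer regularization network of the kind described above: a Gaussian radial basis function is centered at each data point and the coefficients are obtained from the regularized linear system (G + λI)c = y. The kernel width, regularization weight, and sine test function are assumptions for illustration, not choices from the paper.

    # Gaussian RBF regularization network fitted to noisy 1D samples (sketch).
    import numpy as np

    def gaussian_gram(X, Z, sigma):
        """Gram matrix G[i, j] = exp(-||X[i] - Z[j]||^2 / (2 sigma^2))."""
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    rng = np.random.default_rng(2)
    x = np.linspace(0, 1, 40)[:, None]
    y = np.sin(2 * np.pi * x[:, 0]) + 0.1 * rng.standard_normal(40)  # noisy samples

    sigma, lam = 0.1, 1e-3
    G = gaussian_gram(x, x, sigma)
    c = np.linalg.solve(G + lam * np.eye(len(x)), y)        # regularized coefficients

    x_test = np.linspace(0, 1, 200)[:, None]
    f_test = gaussian_gram(x_test, x, sigma) @ c            # network output sum_i c_i G(x - x_i)
    f_true = np.sin(2 * np.pi * x_test[:, 0])
    print("test RMSE:", np.sqrt(np.mean((f_test - f_true) ** 2)))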

On the mathematical foundations of learning

by Felipe Cucker, Steve Smale - Bulletin of the American Mathematical Society, 2002
"... The problem of learning is arguably at the very core of the problem of intelligence, both biological and arti cial. T. Poggio and C.R. Shelton ..."
Abstract - Cited by 330 (12 self) - Add to MetaCart
The problem of learning is arguably at the very core of the problem of intelligence, both biological and artificial. T. Poggio and C.R. Shelton

The Partition of Unity Method

by I. Babuska, J. M. Melenk - International Journal of Numerical Methods in Engineering, 1996
"... A new finite element method is presented that features the ability to include in the finite element space knowledge about the partial differential equation being solved. This new method can therefore be more efficient than the usual finite element methods. An additional feature of the partition-of-u ..."
Abstract - Cited by 211 (2 self) - Add to MetaCart
A new finite element method is presented that features the ability to include in the finite element space knowledge about the partial differential equation being solved. This new method can therefore be more efficient than the usual finite element methods. An additional feature of the partition-of-unity method is that finite element spaces of any desired regularity can be constructed very easily. This paper includes a convergence proof of this method and illustrates its efficiency by an application to the Helmholtz equation for high wave numbers. The basic estimates for a-posteriori error estimation for this new method are also proved. Key words: finite element method, meshless finite element method, finite element methods for highly oscillatory solutions.
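
A rough one-dimensional illustration of the partition-of-unity idea: hat functions on a uniform grid form the partition of unity, each patch carries a local approximation (a local least-squares polynomial here, standing in for a problem-adapted local space), and the global approximant is the blended sum. The grid size, polynomial degree, and target function are assumptions; this sketches only the construction, not the paper's method or its convergence analysis.

    # 1D partition-of-unity approximation with hat functions and local polynomials.
    import numpy as np

    def hat(x, xi, h):
        """Hat function centered at xi with support [xi - h, xi + h]."""
        return np.clip(1.0 - np.abs(x - xi) / h, 0.0, None)

    f = lambda x: np.sin(6 * x) + 0.3 * x          # target function (assumed example)
    nodes = np.linspace(0, 1, 9)
    h = nodes[1] - nodes[0]
    x = np.linspace(0, 1, 400)

    u = np.zeros_like(x)
    for xi in nodes:
        mask = np.abs(x - xi) <= h                 # patch around node xi
        # local space: degree-2 polynomials fitted on the patch
        coef = np.polyfit(x[mask] - xi, f(x[mask]), deg=2)
        u += hat(x, xi, h) * np.polyval(coef, x - xi)   # blend with the partition of unity

    print("max error of the PU approximation:", np.max(np.abs(u - f(x))))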

Citation Context

...proximation Spaces and n-Width. An interesting issue in the context of finding good local approximation spaces is the question of optimality of local spaces. We measure optimality in terms of n-width [32], i.e., in terms of error per degree of freedom for a whole class of functions: d(n; ‖·‖; S) = inf_{E_n} sup_{f ∈ S} inf_{g ∈ E_n} ‖f − g‖, where E_n denotes an n-dimensional space and S is the class...

A Factorization Approach to Grouping

by P. Perona, W. Freeman - in European Conference on Computer Vision, 1998
"... The foreground group in a scene may be `discovered' and computed as a factorized approximation to the pairwise affinity of the elements in the scene. A pointwise approximation of the pairwise affinity information may in fact be interpreted as a `saliency' index, and the foreground of t ..."
Abstract - Cited by 173 (0 self) - Add to MetaCart
The foreground group in a scene may be `discovered' and computed as a factorized approximation to the pairwise affinity of the elements in the scene. A pointwise approximation of the pairwise affinity information may in fact be interpreted as a `saliency' index, and the foreground of the scene may be obtained by thresholding it. An algorithm called `affinity factorization' is thus obtained which may be used for grouping.

Citation Context

... the corresponding singular value. Calling (U, S, V) the singular value decomposition of A, U_i the columns of U, and σ_i^2 = S_{i,i} the singular values of A, we have: p = σ_1 U_1 (3). Proof: see [6]. Notice that A = Aᵀ and therefore U = V. Also, since A = Aᵀ, p is also equal to the eigenvector v_1 of A with largest eigenvalue λ_1: p = λ_1^{1/2} v_1. Let's take a look at Fig. 2 where the function p...
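
A toy sketch of the saliency index quoted in this citation context: form a symmetric pairwise affinity matrix A for a point set, take its leading eigenvector v1 (equivalently the first singular vector, since A = Aᵀ), scale it as p = λ1^{1/2} v1, and threshold p to pick out a tight foreground cluster. The point configuration, affinity width, and threshold are assumptions, not values from the paper.

    # 'Affinity factorization' grouping sketch: saliency = scaled leading eigenvector.
    import numpy as np

    rng = np.random.default_rng(3)
    fg = rng.normal([0.0, 0.0], 0.1, size=(20, 2))      # tight foreground cluster
    bg = rng.uniform(-2.0, 2.0, size=(30, 2))           # scattered background clutter
    pts = np.vstack([fg, bg])

    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * 0.2 ** 2))                    # symmetric pairwise affinity

    lam, V = np.linalg.eigh(A)                          # A = A^T, so eigenvectors = singular vectors
    v1, lam1 = V[:, -1], lam[-1]
    v1 = v1 if v1.sum() >= 0 else -v1                   # fix the sign ambiguity
    p = np.sqrt(lam1) * v1                              # pointwise saliency index

    foreground = p > 0.5 * p.max()                      # threshold (assumed choice)
    print("points flagged as foreground:", int(foreground.sum()), "of", len(pts))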

Deformable kernels for early vision

by Pietro Perona, 1991
"... Early vision algorithms often have a first stage of linear-filtering that 'extracts' from the image information at multiple scales of resolution and multiple orientations. A common difficulty in the design and implementation of such schemes is that one feels compelled to discretize coarsel ..."
Abstract - Cited by 145 (10 self) - Add to MetaCart
Early vision algorithms often have a first stage of linear filtering that 'extracts' from the image information at multiple scales of resolution and multiple orientations. A common difficulty in the design and implementation of such schemes is that one feels compelled to discretize coarsely the space of scales and orientations in order to reduce computation and storage costs. This discretization produces anisotropies due to a loss of translation-, rotation-, and scaling-invariance that makes early vision algorithms less precise and more difficult to design. This need not be so: one can compute and store efficiently the response of families of linear filters defined on a continuum of orientations and scales. A technique is presented that allows one (1) to compute the best approximation of a given family using linear combinations of a small number of 'basis' functions, and (2) to describe all finite-dimensional families, i.e. the families of filters for which a finite-dimensional representation is possible with no error. The technique is based on singular value decomposition and may be applied to generating filters in arbitrary dimensions. Experimental results are presented that demonstrate the applicability of the technique to generating multi-orientation multi-scale 2D edge-detection kernels. The implementation issues are also discussed.
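
A compact sketch of the SVD construction described above, using an assumed example family (an x-derivative-of-Gaussian kernel rotated over a continuum of orientations): sample the family densely, stack the sampled kernels as columns, and keep the leading left singular vectors as the 'basis' filters. Kernel size, width, and orientation sampling are illustration choices.

    # Best finite basis for a rotated filter family via singular value decomposition.
    import numpy as np

    size, sigma = 21, 3.0
    ax = np.arange(size) - size // 2
    X, Y = np.meshgrid(ax, ax)

    def dog_kernel(theta):
        """Derivative of an isotropic Gaussian along direction theta (example family)."""
        u = np.cos(theta) * X + np.sin(theta) * Y
        return -u / sigma ** 2 * np.exp(-(X ** 2 + Y ** 2) / (2 * sigma ** 2))

    thetas = np.linspace(0, np.pi, 64, endpoint=False)      # dense orientation sampling
    F = np.stack([dog_kernel(t).ravel() for t in thetas], axis=1)

    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    k = 2                                                   # number of basis kernels kept
    basis = U[:, :k]

    # approximate a kernel at an orientation outside the sampled set
    target = dog_kernel(0.37).ravel()
    coeffs = basis.T @ target                               # interpolation weights
    rel_err = np.linalg.norm(basis @ coeffs - target) / np.linalg.norm(target)
    print("relative error with %d basis kernels: %.2e" % (k, rel_err))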

A chronology of interpolation: From ancient astronomy to modern signal and image processing

by Erik Meijering - Proceedings of the IEEE, 2002
"... This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into histo ..."
Abstract - Cited by 105 (0 self) - Add to MetaCart
This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into historical perspective. A summary of the insights and recommendations that follow from relatively recent theoretical as well as experimental studies concludes the presentation. Keywords: Approximation, convolution-based interpolation, history, image processing, polynomial interpolation, signal processing, splines. "It is an extremely useful thing to have knowledge of the true origins of memorable discoveries, especially those that have been found not by accident but by dint of meditation. It is not so much that thereby history may attribute to each man his own discoveries and others should be encouraged to earn like commendation, as that the art of making discoveries should be extended by considering noteworthy examples of it."

Citation Context

...he Newton-Gauss formula, (9), and set out to answer the question of which one of the cotabular functions is represented by it (for more detailed information on the development of approximation theory, see the recently published historical review by Pinkus [253]). The answer, he proved, is that under certain conditions it represents the cardinal function ...

Steerable-Scalable Kernels for Edge Detection and Junction Analysis

by Pietro Perona - Image and Vision Computing, 1992
"... Families of kernels that are useful in a variety of early vision algorithms may be obtained by rotating and scaling in a continuum a `template' kernel. These multi-scale multi-orientation family may be approximated by linear interpolation of a discrete finite set of appropriate `basis' ker ..."
Abstract - Cited by 92 (1 self) - Add to MetaCart
Families of kernels that are useful in a variety of early vision algorithms may be obtained by rotating and scaling in a continuum a `template' kernel. Such a multi-scale multi-orientation family may be approximated by linear interpolation of a discrete finite set of appropriate `basis' kernels. A scheme for generating such a basis together with the appropriate interpolation weights is described. Unlike previous schemes by Perona and by Simoncelli et al., it is guaranteed to generate the most parsimonious one. Additionally, it is shown how to exploit two symmetries in edge-detection kernels for reducing storage and computational costs and for generating simultaneously endstop- and junction-tuned filters for free.