Results 1–10 of 29
Sparse Grids for Boundary Integral Equations
, 1998
"... The potential of sparse grid discretizations for solving boundary integral equations is studied for the screen problem on a square in IR 3 . Theoretical and numerical results on approximation rates, preconditioning, adaptivity and compression for piecewise constant and linear sparse grid spaces ar ..."
Abstract

Cited by 24 (16 self)
The potential of sparse grid discretizations for solving boundary integral equations is studied for the screen problem on a square in ℝ³. Theoretical and numerical results on approximation rates, preconditioning, adaptivity and compression for piecewise constant and linear sparse grid spaces are obtained. Classification: 45L10, 65N38, 65R20, 65Y20. Keywords: boundary element method, sparse grids, adaptivity, prewavelets, matrix compression. 1 Introduction. This is a case study for some special boundary integral equations on a two-dimensional manifold Γ in ℝ³ (screen problems). We will focus on the example of the two-dimensional unit square embedded into ℝ³, where Γ = { x : (x₁, x₂) ∈ [0, 1]², x₃ = 0 }. (1) In general, dΓ_x stands for the surface Lebesgue measure with respect to the variable x, |x|₂ denotes the Euclidean norm of x, and n_x is the vector field of normal vectors associated with Γ. We specifically have in mind the single lay...
Sparse grids and related approximation schemes for higher dimensional problems
"... The efficient numerical treatment of highdimensional problems is hampered by the curse of dimensionality. We review approximation techniques which overcome this problem to some extent. Here, we focus on methods stemming from Kolmogorov’s theorem, the ANOVA decomposition and the sparse grid approach ..."
Abstract

Cited by 24 (12 self)
The efficient numerical treatment of high-dimensional problems is hampered by the curse of dimensionality. We review approximation techniques which overcome this problem to some extent. Here, we focus on methods stemming from Kolmogorov’s theorem, the ANOVA decomposition and the sparse grid approach and discuss their prerequisites and properties. Moreover, we present energy-norm based sparse grids and demonstrate that, for functions with bounded mixed derivatives on the unit hypercube, the associated approximation rate in terms of the involved degrees of freedom shows no dependence on the dimension at all, neither in the approximation order nor in the order constant.
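The dimension-dependence claims above rest on how slowly sparse grid degrees of freedom grow compared with a full tensor grid. A minimal counting sketch, assuming the standard hierarchical sparse grid of level n on the unit hypercube (the function names are ours, not the paper's):

```python
# Sketch: degrees of freedom of a full tensor grid vs. a standard
# hierarchical sparse grid of level n in d dimensions (assumed setup).
from math import comb

def full_grid_dof(n, d):
    # full tensor grid with 2^n - 1 interior points per dimension
    return (2**n - 1)**d

def sparse_grid_dof(n, d):
    # hierarchical subspaces with level sum |l|_1 = s contribute
    # comb(s-1, d-1) multi-indices, each of size 2^(s-d); the sparse
    # grid keeps s = d, ..., n+d-1, giving O(2^n n^(d-1)) points
    return sum(comb(s - 1, d - 1) * 2**(s - d) for s in range(d, n + d))

print(full_grid_dof(10, 3))    # about 1.07e9 points
print(sparse_grid_dof(10, 3))  # dramatically fewer
```

For d = 1 the two counts coincide, which is a quick sanity check on the level-sum bookkeeping.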
Nonlinear piecewise polynomial approximation beyond Besov spaces
 Appl. Comput. Harmonic Anal
"... We study nonlinear nterm approximation in Lp(R2) (0 < p < ∞) from Courant elements or (discontinuous) piecewise polynomials generated by multilevel nested triangulations of R2 which allow arbitrarily sharp angles. To characterize the rate of approximation we introduce and develop three families of ..."
Abstract

Cited by 19 (4 self)
We study nonlinear n-term approximation in Lp(ℝ²) (0 < p < ∞) from Courant elements or (discontinuous) piecewise polynomials generated by multilevel nested triangulations of ℝ² which allow arbitrarily sharp angles. To characterize the rate of approximation we introduce and develop three families of smoothness spaces generated by multilevel nested triangulations. We call them B-spaces because they can be viewed as generalizations of Besov spaces. We use the B-spaces to prove Jackson and Bernstein estimates for n-term piecewise polynomial approximation and consequently characterize the corresponding approximation spaces by interpolation. We also develop methods for n-term piecewise polynomial approximation which capture the rates of the best approximation.
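What makes n-term approximation "nonlinear" is that the retained terms depend on the target function. In an orthonormal basis the l2-best choice is simply the n largest-magnitude coefficients; a tiny sketch of that thresholding step (illustrative only, not the paper's triangulation-based construction):

```python
# Sketch: best n-term approximation in an orthonormal basis (assumption:
# for orthonormal systems, keeping the n largest-magnitude coefficients
# minimizes the l2 error over all n-term approximants).
def best_n_term(coeffs, n):
    # indices of the n largest coefficients by magnitude
    keep = set(sorted(range(len(coeffs)),
                      key=lambda i: abs(coeffs[i]), reverse=True)[:n])
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]

def l2_error(coeffs, approx):
    # by Parseval, the error is the l2 norm of the dropped coefficients
    return sum((a - b)**2 for a, b in zip(coeffs, approx)) ** 0.5

c = [3.0, -0.1, 2.0, 0.05, -1.5, 0.2]
a = best_n_term(c, 3)
print(a)             # only the three largest-magnitude terms survive
print(l2_error(c, a))
```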
Approximation algorithms for wavelet transform coding of data streams
 In SODA ’06: Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms
, 2006
"... Abstract — This paper addresses the problem of finding a Bterm wavelet representation of a given discrete function f ∈ R n whose distance from f is minimized. The problem is well understood when we seek to minimize the Euclidean distance between f and its representation. The first known algorithms f ..."
Abstract

Cited by 19 (7 self)
This paper addresses the problem of finding a B-term wavelet representation of a given discrete function f ∈ ℝⁿ whose distance from f is minimized. The problem is well understood when we seek to minimize the Euclidean distance between f and its representation. This paper presents the first known algorithms for finding provably approximate representations minimizing general ℓp distances (including ℓ∞) under a wide variety of compactly supported wavelet bases. For the Haar basis, a polynomial-time approximation scheme is demonstrated. These algorithms are applicable in the one-pass sublinear-space data stream model of computation. They generalize naturally to multiple dimensions and weighted norms. Also presented are a universal representation that provides a provable approximation guarantee under all ℓp norms simultaneously, and the first approximation algorithms for bit-budget versions of the problem, known as adaptive quantization. Further, it is shown that the algorithms presented here can be used to select a basis from a tree-structured dictionary of bases and find a B-term representation of the given function that provably approximates its best dictionary-basis representation. Index Terms: adaptive quantization, best basis selection, compactly supported wavelets, nonlinear approximation, sparse representation, streaming algorithms, transform coding, universal representation.
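For the easy Euclidean case mentioned above, a B-term Haar representation is just coefficient thresholding after an orthonormal Haar transform. The sketch below shows that l2 baseline only; the paper's contribution is the much harder general-ℓp case, which this does not implement:

```python
# Sketch: B-term Haar representation for the l2 case (assumed baseline).
from math import sqrt

def haar(v):
    # full orthonormal Haar decomposition; len(v) must be a power of two
    v, out = list(v), []
    while len(v) > 1:
        avgs = [(v[i] + v[i+1]) / sqrt(2) for i in range(0, len(v), 2)]
        difs = [(v[i] - v[i+1]) / sqrt(2) for i in range(0, len(v), 2)]
        out = difs + out
        v = avgs
    return v + out  # [scaling coefficient] + detail coefficients, coarse to fine

def ihaar(c):
    # inverse transform, undoing the levels coarse to fine
    v, pos = c[:1], 1
    while pos < len(c):
        difs = c[pos:pos + len(v)]
        v = [x for a, d in zip(v, difs)
               for x in ((a + d) / sqrt(2), (a - d) / sqrt(2))]
        pos += len(difs)
    return v

f = [4.0, 2.0, 5.0, 5.0, 1.0, 1.0, 0.0, 8.0]
c = haar(f)
B = 3
keep = set(sorted(range(len(c)), key=lambda i: abs(c[i]), reverse=True)[:B])
cB = [x if i in keep else 0.0 for i, x in enumerate(c)]
g = ihaar(cB)  # the B-term reconstruction of f
```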
On the regularity of the electronic Schrödinger equation in Hilbert spaces of . . .
, 2002
"... ..."
Wavelet Methods for Second Order Elliptic Problems, Preconditioning and Adaptivity
 SIAM J. Sci. Comp
, 1997
"... : Wavelet methods allow to combine high order accuracy, efficient preconditioning techniques and adaptive approximation, in order to solve efficiently elliptic operator equations. Many difficulties remain, in particular related to the adaptation of wavelet decompositions to bounded domains with pres ..."
Abstract

Cited by 9 (0 self)
Wavelet methods make it possible to combine high-order accuracy, efficient preconditioning techniques and adaptive approximation in order to solve elliptic operator equations efficiently. Many difficulties remain, in particular related to the adaptation of wavelet decompositions to bounded domains with prescribed boundary conditions, as well as the possibly high constants in the O(1) preconditioning. In this paper we consider second-order operators on tensor product domains. For such domains, we discuss the construction of high-order multiresolution approximation and wavelet bases, and in particular the choice of the wavelets near the boundary in order to optimize the efficiency of the diagonal preconditioning of elliptic operators. In order to improve the constants obtained by such simple diagonal preconditioning, we propose an almost diagonal preconditioner based on solving local Petrov-Galerkin problems. The efficiency of this method is illustrated by solving elliptic second-order problems ...
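Why a level-dependent diagonal scaling can precondition an elliptic operator is easiest to see in a 1D analogue (our sketch, not the paper's tensor-product construction): for -u'' on (0,1), the hierarchical hat functions are orthogonal in the energy inner product ∫u'v', so the stiffness matrix is exactly diagonal with entries growing like 2^j, and dividing by the diagonal trivially yields O(1) conditioning.

```python
# Sketch (assumed 1D setting): stiffness matrix of -u'' on (0,1) in the
# hierarchical hat basis is exactly diagonal, with entries 2^(j+1) at level j.
def dhat(x, c, h):
    # derivative of the hat function centered at c with half-width h
    if c - h < x < c:  return  1.0 / h
    if c < x < c + h:  return -1.0 / h
    return 0.0

J = 4
basis = []                       # (center, half-width) of hierarchical hats
for j in range(1, J + 1):
    h = 2.0 ** (-j)
    for k in range(1, 2 ** (j - 1) + 1):
        basis.append(((2 * k - 1) * h, h))

N = 2 ** (J + 2)                 # midpoint grid; exact for piecewise constants
xs = [(i + 0.5) / N for i in range(N)]
def a(p, q):                     # stiffness entry: integral of phi_p' phi_q'
    (c1, h1), (c2, h2) = basis[p], basis[q]
    return sum(dhat(x, c1, h1) * dhat(x, c2, h2) for x in xs) / N

n = len(basis)
offdiag = max(abs(a(p, q)) for p in range(n) for q in range(n) if p != q)
diag = [a(p, p) for p in range(n)]
print(offdiag)    # ~0: hierarchical hats are energy-orthogonal in 1D
print(diag[:3])   # 4, 8, 8: entries scale like 2^(j+1)
```

In higher dimensions, and for general wavelet bases, this exact diagonality is lost, which is precisely where the constants the abstract worries about come from.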
The averaging lemma
 J. Amer. Math. Soc
"... Averaging lemmas arise in the study of regularity of solutions to nonlinear transport equations. The present paper shows how techniques from Harmonic Analysis, such as wavelet decompositions, maximal functions, and interpolation, can be used to prove averaging lemmas and to establish their sharpness ..."
Abstract

Cited by 9 (3 self)
Averaging lemmas arise in the study of regularity of solutions to nonlinear transport equations. The present paper shows how techniques from Harmonic Analysis, such as wavelet decompositions, maximal functions, and interpolation, can be used to prove averaging lemmas and to establish their sharpness.
The polyharmonic local sine transform: A new tool for local image analysis and synthesis without edge effect
 Applied and Computational Harmonic Analysis
, 2006
"... We introduce a new local sine transform that can completely localize image information both in the space domain and in the spatial frequency domain. This transform, which we shall call the polyharmonic local sine transform (PHLST), first segments an image into local pieces using the characteristic f ..."
Abstract

Cited by 8 (7 self)
We introduce a new local sine transform that can completely localize image information both in the space domain and in the spatial frequency domain. This transform, which we shall call the polyharmonic local sine transform (PHLST), first segments an image into local pieces using characteristic functions, then decomposes each piece into two components: the polyharmonic component and the residual. The polyharmonic component is obtained by solving the elliptic boundary value problem associated with the so-called polyharmonic equation (e.g., Laplace's equation, the biharmonic equation, etc.) given the boundary values (the pixel values along the boundary created by the characteristic function). Subsequently this component is subtracted from the original local piece to obtain the residual. Since the boundary values of the residual vanish, its Fourier sine series expansion has quickly decaying coefficients. Consequently, PHLST can distinguish intrinsic singularities in the data from the artificial discontinuities created by the local windowing. Combining this ability with the quickly decaying coefficients of the residuals, PHLST is also effective for image approximation, which we demonstrate using both synthetic and real images. In addition, we introduce the polyharmonic local Fourier transform (PHLFT) by replacing the Fourier sine series above by the complex Fourier series. With a slight sacrifice of the decay rate of the expansion coefficients, PHLFT allows one to compute local Fourier magnitudes and phases without the edge effect (or Gibbs phenomenon), yet is invertible and useful for various filtering, analysis, and approximation purposes.
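The mechanism is visible already in one dimension, where Laplace's equation with given boundary values has a straight line as its solution. A sketch of this 1D analogue (our simplification of the order-1 case, not the paper's 2D image pipeline): subtracting the linear interpolant of the endpoint values leaves a residual that vanishes at the boundary, so its sine coefficients decay noticeably faster.

```python
# Sketch: 1D analogue of PHLST (assumption: the order-1 "polyharmonic
# component" is the linear interpolant of the boundary values).
from math import sin, pi, exp

N = 256
xs = [i / (N - 1) for i in range(N)]
f = [exp(x) for x in xs]                  # smooth, but nonzero at the ends

linear = [f[0] + (f[-1] - f[0]) * x for x in xs]  # 1D harmonic component
resid  = [a - b for a, b in zip(f, linear)]       # vanishes at both ends

def sine_coeff(g, k):
    # crude Riemann approximation of 2 * integral of g(x) sin(k pi x) dx
    return 2 * sum(v * sin(k * pi * x) for v, x in zip(g, xs)) / (N - 1)

for k in (1, 5, 25):
    print(k, abs(sine_coeff(f, k)), abs(sine_coeff(resid, k)))
# residual coefficients decay like k^-3 rather than the k^-1 of f itself
```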
High Dimensional Smoothing Based on Multilevel Analysis
"... A fundamental issue in Data Mining is the development of algorithms to extract useful information from very large databases. One important technique is to estimate a smooth function approximating the data. Such an approximation can for example be used for visualisation, prediction, or classification ..."
Abstract

Cited by 7 (4 self)
A fundamental issue in Data Mining is the development of algorithms to extract useful information from very large databases. One important technique is to estimate a smooth function approximating the data. Such an approximation can for example be used for visualisation, prediction, or classification purposes. However, the number of observations can be of the order of millions and there may be hundreds of variables recorded, so one has to deal with the so-called "curse of dimensionality". The algorithmic complexity of this process is typically of the order m^(3d), where m is the number of grid points in each dimension and d is the number of dimensions. We propose a method for approximating a high dimensional surface by computing a projection onto multilevel spaces of low density, and we demonstrate that the algorithmic complexity of this method is proportional to (j^(d-1) (2^(j+1) - 1))^3, where j = ⌈log₂ m⌉, a substantial reduction in computational work. In addition, we show that the approximation error is proportional to (j+d-1 choose d-1) 2^(-2j), with the proportionality constant depending on the smoothness of the computed surface.
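The two operation counts quoted in the abstract can be compared directly. A small sketch, assuming the costs stated there: solving a dense system of size n costs n^3, so a full grid with m points per dimension costs (m^d)^3 = m^(3d), while the multilevel projection works with roughly j^(d-1)(2^(j+1) - 1) unknowns for j = ⌈log₂ m⌉ (function names are ours):

```python
# Sketch: compare the full-grid and multilevel operation counts from the
# abstract (assumed cubic solve cost in the number of unknowns).
from math import ceil, log2

def full_cost(m, d):
    # dense solve over a full tensor grid: (m^d)^3 = m^(3d)
    return (m ** d) ** 3

def multilevel_cost(m, d):
    # dense solve over ~ j^(d-1) (2^(j+1) - 1) multilevel unknowns
    j = ceil(log2(m))
    return (j ** (d - 1) * (2 ** (j + 1) - 1)) ** 3

m, d = 64, 5
print(full_cost(m, d))        # 64^15, astronomically large
print(multilevel_cost(m, d))  # many orders of magnitude smaller
```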