Results 1–10 of 14
Computation of differential operators in wavelet coordinates
Math. Comp., 2004
"... and DeVore proposed an adaptive wavelet algorithm for solving general operator equations. Assuming that the operator defines a boundedly invertible mapping between a Hilbert space and its dual, and that a Riesz basis of wavelet type for this Hilbert space is available, the operator equation is trans ..."
Abstract

Cited by 13 (11 self)
and DeVore proposed an adaptive wavelet algorithm for solving general operator equations. Assuming that the operator defines a boundedly invertible mapping between a Hilbert space and its dual, and that a Riesz basis of wavelet type for this Hilbert space is available, the operator equation is transformed into an equivalent well-posed infinite matrix-vector system. This system is solved by an iterative method, where each application of the infinite stiffness matrix is replaced by an adaptive approximation. It was shown that if the errors of the best N-term linear combinations from the wavelet basis are O(N^{-s}) for some s > 0, which is determined by the Besov regularity of the solution and the order of the wavelet basis, then the approximations yielded by the adaptive method with N terms also have errors of O(N^{-s}). Moreover, their computation takes only O(N) operations, provided s < s*, with s* being a measure of how well the infinite stiffness matrix with respect to the wavelet basis can be approximated by computable sparse matrices. Under appropriate conditions on the wavelet basis, for both differential and singular integral operators and for the relevant range of s, in [SIAM J. Math. Anal., 35(5) (2004), pp. 1110–1132] we showed that s* > s, assuming that each entry of the stiffness matrix is exactly available at unit cost. Generally these entries have to be approximated using numerical quadrature. In this paper, restricting ourselves to differential operators, we develop a numerical integration scheme that computes these entries with an additional error that is consistent with the approximation error, while in each column the average computational cost per entry is O(1). As a consequence, we conclude that the adaptive wavelet algorithm has optimal computational complexity.
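As a toy illustration of the inexact iteration described in this abstract — not the authors' algorithm: the wavelet discretization and the adaptive sparse matrix compression are replaced here by a small dense SPD matrix applied exactly, and all names are hypothetical — one can sketch a damped Richardson scheme in which every matrix apply goes through an (here trivial) approximate routine:

```python
import numpy as np

def inexact_richardson(apply_A, f, omega, tol=1e-8, max_iter=200):
    """Richardson iteration u_{k+1} = u_k + omega * (f - A u_k), where
    apply_A(u, eps) returns A @ u up to accuracy eps.  In the adaptive
    wavelet method this apply would be the adaptive sparse approximation."""
    u = np.zeros_like(f)
    for _ in range(max_iter):
        r = f - apply_A(u, tol)      # approximate residual
        if np.linalg.norm(r) < tol:
            break
        u = u + omega * r
    return u

# Toy stand-in for the (infinite) stiffness matrix: a small SPD matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T / 50 + np.eye(50)        # SPD, eigenvalues >= 1
f = rng.standard_normal(50)
u = inexact_richardson(lambda v, eps: A @ v, f,
                       omega=1.0 / np.linalg.norm(A, 2))
```

With omega = 1/||A||, the residual contracts by the factor 1 − λ_min/λ_max per step, so the loop terminates with a residual below `tol`; the paper's contribution is making each approximate apply cost O(1) per relevant entry while preserving this accuracy.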
A Note on Learning with Integral Operators
"... A large number of learning algorithms, for example, spectral clustering, kernel Principal Components Analysis and many manifold methods, are based on estimating eigenvalues and eigenfunctions of operators defined by a similarity function or a kernel, given empirical data. Thus for the analysis of al ..."
Abstract

Cited by 4 (1 self)
A large number of learning algorithms, for example, spectral clustering, kernel Principal Components Analysis and many manifold methods, are based on estimating eigenvalues and eigenfunctions of operators defined by a similarity function or a kernel, given empirical data. Thus, for the analysis of algorithms, it is an important problem to be able to assess the quality of such approximations. The contribution of our paper is twofold: 1. We use a technique based on a concentration inequality for Hilbert spaces to provide new, much simplified proofs for a number of results in spectral approximation. 2. Using these methods we provide several new results for estimating spectral properties of the graph Laplacian operator, extending and strengthening results from [27].
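As a minimal illustration of the objects involved — assuming a Gaussian kernel and an i.i.d. sample, both illustrative choices not taken from the paper — the spectrum of the empirical integral operator is simply the spectrum of the scaled kernel matrix:

```python
import numpy as np

def empirical_kernel_eigs(X, sigma=1.0, k=5):
    """Top-k eigenvalues of the empirical operator (1/n) * K, where
    K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)).  As n grows, these
    concentrate around the top eigenvalues of the integral operator
    (L f)(x) = integral of K(x, y) f(y) d rho(y)."""
    n = len(X)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    return np.sort(np.linalg.eigvalsh(K / n))[::-1][:k]

rng = np.random.default_rng(1)
eigs_100 = empirical_kernel_eigs(rng.standard_normal((100, 2)))
eigs_400 = empirical_kernel_eigs(rng.standard_normal((400, 2)))
```

The point of the concentration-inequality technique is to quantify how close `eigs_100` and `eigs_400` are to their common population limits, with high probability over the sample.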
Risk bounds for random regression graphs
 Foundations of Computational Mathematics
"... Abstract. We consider the regression problem and describe an algorithm approximating the regression function by estimators piecewise constant on the elements of an adaptive partition. The partitions are iteratively constructed by suitable random merges and splits, using cuts of arbitrary geometry. W ..."
Abstract

Cited by 2 (0 self)
Abstract. We consider the regression problem and describe an algorithm that approximates the regression function by estimators which are piecewise constant on the elements of an adaptive partition. The partitions are constructed iteratively by suitable random merges and splits, using cuts of arbitrary geometry. We give a risk bound under the assumption that a “weak learning hypothesis” holds, and characterize this hypothesis in terms of a suitable RKHS.
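A drastically simplified stand-in for such a partition-based estimator — a greedy recursive split on the line rather than the paper's random merges and splits with cuts of arbitrary geometry, and with all names hypothetical — might look like:

```python
import numpy as np

def fit_partition(x, y, depth=4, min_leaf=5):
    """Piecewise-constant regression on a greedily refined partition of
    the line: each cell is split at the empirical-risk-minimizing cut,
    and the estimator is the mean of y on each final cell."""
    def build(idx, d):
        if d == 0 or len(idx) <= 2 * min_leaf:
            return ('leaf', y[idx].mean())
        xs = np.sort(x[idx])
        best_sse, best_cut = np.inf, None
        for c in xs[min_leaf:-min_leaf]:     # keep >= min_leaf points per side
            left, right = idx[x[idx] <= c], idx[x[idx] > c]
            sse = ((y[left] - y[left].mean()) ** 2).sum() \
                + ((y[right] - y[right].mean()) ** 2).sum()
            if sse < best_sse:
                best_sse, best_cut = sse, c
        left, right = idx[x[idx] <= best_cut], idx[x[idx] > best_cut]
        return ('split', best_cut, build(left, d - 1), build(right, d - 1))
    tree = build(np.arange(len(x)), depth)
    def predict(v):
        node = tree
        while node[0] == 'split':
            node = node[2] if v <= node[1] else node[3]
        return node[1]
    return predict

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 300)
y = (x > 0.5).astype(float)          # a noiseless step function
fhat = fit_partition(x, y)
```

On this step function the greedy scheme recovers the jump exactly; the paper's risk bound concerns the much harder noisy setting, where the weak learning hypothesis controls how much each random split can reduce the risk.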
Spectral stability of the Neumann Laplacian
J. Diff. Eq., 2004
"... We prove the equivalence of Hardy and Sobolevtype inequalities, certain uniform bounds on the heat kernel and some spectral regularity properties of the Neumann Laplacian associated with an arbitrary region of finite measure in Euclidean space. We also prove that if one perturbs the boundary of th ..."
Abstract

Cited by 2 (0 self)
We prove the equivalence of Hardy- and Sobolev-type inequalities, certain uniform bounds on the heat kernel, and some spectral regularity properties of the Neumann Laplacian associated with an arbitrary region of finite measure in Euclidean space. We also prove that if one perturbs the boundary of the region within a uniform Hölder category, then the eigenvalues of the Neumann Laplacian change by a small and explicitly estimated amount.
INVERSE-TYPE ESTIMATES ON hp-FINITE ELEMENT SPACES AND APPLICATIONS
"... Abstract. This work is concerned with the development of inversetype inequalities for piecewise polynomial functions and, in particular, functions belonging to hpfinite element spaces. The cases of positive and negative Sobolev norms are considered for both continuous and discontinuous finite elem ..."
Abstract

Cited by 2 (0 self)
Abstract. This work is concerned with the development of inverse-type inequalities for piecewise polynomial functions and, in particular, functions belonging to hp-finite element spaces. The cases of positive and negative Sobolev norms are considered for both continuous and discontinuous finite element functions. The inequalities are explicit both in the local polynomial degree and the local mesh size. The assumptions on the hp-finite element spaces are very weak, allowing anisotropic (shape-irregular) elements and varying polynomial degree across elements. Finally, the new inverse-type inequalities are used to derive bounds for the condition number of symmetric stiffness matrices of hp-boundary element method discretisations of integral equations, with elementwise discontinuous basis functions constructed via scaled tensor products of Legendre polynomials.
Scattering Into Cones And Flux Across Surfaces In Quantum Mechanics: A Pathwise Probabilistic Approach
"... We show how the scatteringintocones and uxacrosssurfaces theorems in Quantum Mechanics have very intuitive pathwise probabilistic versions based on some results by Carlen about large time behaviour of paths of Nelson's diusions. The quantum mechanical results can be then recovered by taking expe ..."
Abstract

Cited by 1 (1 self)
We show how the scattering-into-cones and flux-across-surfaces theorems in Quantum Mechanics have very intuitive pathwise probabilistic versions, based on some results by Carlen about the large-time behaviour of paths of Nelson's diffusions. The quantum mechanical results can then be recovered by taking expectations in our pathwise statements.
Convergence of Time-Dependent Turing Structures to a Stationary Solution
"... Abstract. Stability of stationary solutions of parabolic equations is conventionally studied by linear stability analysis, Lyapunov functions or lower and upper functions. We discuss here another approach based on differential inequalities written for the L 2 norm of the solution. This method is app ..."
Abstract

Cited by 1 (1 self)
Abstract. Stability of stationary solutions of parabolic equations is conventionally studied by linear stability analysis, Lyapunov functions, or lower and upper functions. We discuss here another approach, based on differential inequalities written for the L^2 norm of the solution. This method is appropriate for equations with time-dependent coefficients. It yields new results and is applicable when the usual linearization method is not. Key words: parabolic systems, stationary solutions, stability, differential inequalities
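Schematically — under the illustrative assumption that the difference v = u − w between the time-dependent solution and the stationary state satisfies a coercivity bound ⟨∂_t v, v⟩ ≤ −c(t)‖v‖², which is our simplification and not a statement from the paper — the differential inequality for the L^2 norm closes via Grönwall's lemma:

```latex
\frac{1}{2}\,\frac{d}{dt}\,\|v(t)\|_{L^2}^2
  = \langle \partial_t v, v \rangle
  \le -c(t)\,\|v(t)\|_{L^2}^2
\quad\Longrightarrow\quad
\|v(t)\|_{L^2}^2 \le \|v(0)\|_{L^2}^2
  \exp\!\Big(-2\int_0^t c(s)\,ds\Big).
```

If the integral of c diverges, this yields convergence to the stationary solution in L^2 even when the coefficients depend on time and linearization is unavailable.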
On Learning with Integral Operators, Lorenzo Rosasco, Center for Biological and Computational Learning
"... A large number of learning algorithms, for example, spectral clustering, kernel Principal Components Analysis and many manifold methods are based on estimating eigenvalues and eigenfunctions of operators defined by a similarity function or a kernel, given empirical data. Thus for the analysis of alg ..."
Abstract
A large number of learning algorithms, for example, spectral clustering, kernel Principal Components Analysis and many manifold methods, are based on estimating eigenvalues and eigenfunctions of operators defined by a similarity function or a kernel, given empirical data. Thus, for the analysis of algorithms, it is an important problem to be able to assess the quality of such approximations. The contribution of our paper is twofold: 1. We use a technique based on a concentration inequality for Hilbert spaces to provide new, much simplified proofs for a number of results in spectral approximation. 2. Using these methods we provide several new results for estimating spectral properties of the graph Laplacian operator, extending and strengthening results from von Luxburg et al. (2008).
EMPIRICAL OPERATORS
, 2008
"... Abstract. A large number of learning algorithms, for example, spectral clustering, kernel Principal Components Analysis and many manifold methods are based on estimating eigenvalues and eigenfunctions of operators defined by a similarity function or a kernel, given empirical data. Thus for the analy ..."
Abstract
Abstract. A large number of learning algorithms, for example, spectral clustering, kernel Principal Components Analysis and many manifold methods, are based on estimating eigenvalues and eigenfunctions of operators defined by a similarity function or a kernel, given empirical data. Thus, for the analysis of algorithms, it is an important problem to be able to assess the quality of such approximations. The contribution of our paper is twofold: 1. We use a technique based on a concentration inequality for Hilbert spaces to provide new, much simplified proofs for a number of results in spectral approximation. 2. Using these methods we provide several new results for estimating spectral properties of the graph Laplacian operator, extending and strengthening results from [26].