Results 1–10 of 12
Generalization Performance of Regularization Networks and Support . . .
IEEE Transactions on Information Theory, 2001
Abstract

Cited by 73 (20 self)
We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs make use of a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite-dimensional unit ball in feature space into a finite-dimensional space. The covering numbers of the class are then determined via the entropy numbers of the operator. These numbers, which characterize the degree of compactness of the operator, can be bounded in terms of the eigenvalues of an integral operator induced by the kernel function used by the machine. As a consequence, we are able to theoretically explain the effect of the choice of kernel function on the generalization performance of support vector machines.
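The eigenvalue decay that drives these covering-number bounds can be seen numerically. Below is a minimal sketch, not taken from the paper: it assumes a Gaussian RBF kernel and uses the Gram matrix on a grid as a basic Nyström-style approximation of the integral operator's spectrum; the grid, bandwidth, and function names are illustrative.

```python
import numpy as np

def kernel_eigenvalues(x, bandwidth=0.5):
    """Approximate the leading eigenvalues of the integral operator
    induced by a Gaussian RBF kernel, via the Gram matrix on sample
    points x (a basic Nystroem-style approximation)."""
    # Gram matrix K_ij = exp(-|x_i - x_j|^2 / (2 * bandwidth^2))
    d2 = (x[:, None] - x[None, :]) ** 2
    gram = np.exp(-d2 / (2.0 * bandwidth ** 2))
    # Eigenvalues of Gram / n approximate those of the operator.
    return np.sort(np.linalg.eigvalsh(gram / len(x)))[::-1]

x = np.linspace(-1.0, 1.0, 200)
lam = kernel_eigenvalues(x)
# Gaussian kernels have rapidly decaying spectra; by the abstract's
# argument, fast eigenvalue decay means small entropy numbers.
print(lam[:5])
print(lam[10] / lam[0])
```

The rapid ratio decay printed at the end is the qualitative phenomenon the bounds exploit: smoother kernels have faster spectral decay, hence smaller covering numbers.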
Random Approximation in Numerical Analysis
Proceedings of the Conference "Functional Analysis", Essen, 1994
Abstract

Cited by 29 (22 self)
The purpose of this paper is twofold. In the first part (sections 2–6) I want to give a survey of recent developments in Monte Carlo complexity. This will include techniques to derive sharp lower bounds as well as the construction of concrete numerical methods which attain these optimal bounds. The field covered here lies at the frontiers of several disciplines, among them theoretical computer science, numerical analysis, probability theory, approximation theory and, to a large extent, functional analysis. I want to stress the latter aspect and show how new techniques from Banach space and operator theory can be applied to Monte Carlo complexity. In the second part I want to present new results: the solution to a problem concerning the Monte Carlo complexity of Fredholm integral equations. This will demonstrate in detail the general approach outlined in part one. We develop a new, fast algorithm: it is a combination of Monte Carlo methods with the Galerkin technique, an approach which seems to be new to this field. The basis functions used for the Galerkin discretization are orthogonal splines of minimal smoothness. They lead to an implementable procedure of minimal computational cost. The paper is organized as follows. In section 2, the main notions of information-based complexity theory are explained. We cover both the deterministic and the stochastic setting in detail, also for the sake of later comparisons. Some relations to s-number theory are presented in section 3. The role of the average case in proofs of lower bounds for Monte Carlo methods is explained in section 4. In the following three sections, we analyse the complexity of basic numerical problems: section 5 deals with numerical integration and contains classical results on the complexity of Monte Carlo quadrature, toge...
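The classical n^(-1/2) Monte Carlo convergence rate surveyed in this paper can be illustrated with a toy quadrature experiment. This is only a sketch of plain Monte Carlo integration, not the paper's Fredholm/Galerkin algorithm; the integrand, sample sizes, and repetition count are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integral(f, n):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return f(rng.random(n)).mean()

f = lambda x: x ** 2  # exact integral over [0, 1] is 1/3
errors = []
for n in (100, 10_000):
    # Average the absolute error over 200 independent runs.
    err = np.mean([abs(mc_integral(f, n) - 1.0 / 3.0) for _ in range(200)])
    errors.append(err)
# The mean error shrinks roughly like n^(-1/2): multiplying n by 100
# should cut the error by about a factor of 10.
print(errors)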
Entropy Numbers, Operators and Support Vector Kernels
IEEE Transactions on Information Theory, 1998
Abstract

Cited by 11 (3 self)
We derive new bounds for the generalization error of feature space machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs are based on a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite-dimensional unit ball in feature space into a finite-dimensional space. The covering numbers of the class are then determined via the entropy numbers of the operator. These numbers, which characterize the degree of compactness of the operator, can be bounded in terms of the eigenvalues of an integral operator induced by the kernel function used by the machine. As a consequence, we are able to theoretically explain the effect of the choice of kernel functions on the generalization performance of support vector machines.
Duality of Metric Entropy
Annals of Mathematics, 2004
Abstract

Cited by 9 (3 self)
For two convex bodies K and T in R^n, the covering number of K by T, denoted N(K, T), is defined as the minimal number of translates of T needed to cover K. Let us denote by K◦ the polar body of K and by D the Euclidean unit ball in R^n. We prove that the two functions of t, N(K, tD) and N(D, tK◦), are equivalent in the appropriate sense, uniformly over symmetric convex bodies K ⊂ R^n and over n ∈ N. In particular, this verifies the duality conjecture for entropy numbers of linear operators, posed by Pietsch in 1972, in the central case when either the domain or the range of the operator is a Hilbert space.
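The flavor of this equivalence can be sensed with a crude numerical sketch. The code below computes grid-based upper bounds on N(K, tD) and N(D, tK◦) for K the square [-1, 1]^2 (whose polar K◦ is the ℓ1 ball) and compares their logarithms. The grid constructions are my own assumptions, and this only illustrates the statement for one body; it proves nothing.

```python
from math import ceil, log, sqrt

def cover_square_by_balls(t):
    """Upper bound on N(K, tD) for K = [-1, 1]^2: a grid cell of side
    t*sqrt(2) has circumradius t, so each cell fits in one ball tD."""
    return ceil(2.0 / (t * sqrt(2.0))) ** 2

def cover_disk_by_cross_polytopes(t):
    """Upper bound on N(D, tK°) where tK° = {|x| + |y| <= t} is the
    scaled l1 ball: an axis-aligned grid cell of side t fits inside
    one translate of tK°."""
    return ceil(2.0 / t) ** 2

for t in (0.5, 0.1, 0.02):
    a = log(cover_square_by_balls(t))
    b = log(cover_disk_by_cross_polytopes(t))
    print(f"t={t}: log N(K,tD) <= {a:.2f}, log N(D,tK°) <= {b:.2f}")
```

As t shrinks, the two logarithms track each other up to a bounded factor, which is the kind of equivalence (in log scale, up to constants in t) the theorem establishes uniformly over all symmetric K and all dimensions.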
On approximation numbers of composition operators
 Journal of Approximation Theory
Abstract

Cited by 3 (3 self)
We show that the approximation numbers of a compact composition operator on the weighted Bergman spaces Bα of the unit disk can tend to 0 arbitrarily slowly, but that they never tend quickly to 0: they decay no faster than exponentially, and this exponential speed of convergence is attained only for symbols which do not approach the unit circle. We also give an upper bound and an explicit example.
A Geometric Lemma and Duality of Entropy Numbers
Abstract

Cited by 3 (1 self)
1 Introduction

We shall study in this note the following conjecture, to which we shall refer as the "Geometric Lemma"; we state it first in a somewhat imprecise form. Let n, N be positive integers with k := (log N)/n. If S ⊂ R^n is a finite set whose cardinality does not exceed N and such that its convex hull K := conv S admits an equally small Euclidean 1-net (i.e., K can be covered by no more than N translates of the unit Euclidean ball D), then (1/2)D ⊄ K.
On the coverings of an ellipsoid in the Euclidean space
IEEE Transactions on Information Theory, 2004
Abstract

Cited by 2 (2 self)
The thinnest coverings of ellipsoids are studied in Euclidean spaces of arbitrary dimension. Given any ellipsoid, the main goal is to find its ε-entropy, which is the logarithm of the minimum number of balls of radius ε needed to cover this ellipsoid. A tight asymptotic bound on the ε-entropy is obtained for all but the most oblong ellipsoids, which have very high eccentricity. This bound depends only on the volume of the sub-ellipsoid spanned over all the axes of the original ellipsoid whose length (diameter) exceeds ε. The results can be applied to vector quantization performed when data streams from different sources are bundled together in one block. Index Terms—Covering, ellipsoid, entropy, Euclidean space, unit ball.
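The shape of such a bound can be sketched numerically: only axes longer than ε contribute to the entropy, roughly log2(a_i/ε) bits each. This additive form is a heuristic reading of the volume statement above, not the paper's tight bound; the function name and axis values are illustrative.

```python
import numpy as np

def ellipsoid_entropy_estimate(semi_axes, eps):
    """Heuristic epsilon-entropy estimate for an ellipsoid with the
    given semi-axes: sum log2(a_i / eps) over axes longer than eps.
    Axes shorter than eps are absorbed by the covering balls and
    contribute nothing, mirroring the sub-ellipsoid volume bound."""
    a = np.asarray(semi_axes, dtype=float)
    big = a[a > eps]
    return float(np.sum(np.log2(big / eps)))

axes = [8.0, 4.0, 2.0, 0.5, 0.1]
print(ellipsoid_entropy_estimate(axes, 1.0))  # -> 6.0 (only 8, 4, 2 contribute)
```

Shrinking ε both activates more axes and raises each axis's contribution, so the estimate grows, as the true ε-entropy must.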
A Study About Algorithmic Stability and Its Relation to Generalization
, 2000
"... This technical report presents some results about how to control the generalization error for stable algorithms. We define a new notion of stable algorithm and derive confidence bounds. It is shown that regularization algorithms are stable when the regularization coefficient is large or when the `1 ..."
Abstract
This technical report presents some results about how to control the generalization error of stable algorithms. We define a new notion of stable algorithm and derive confidence bounds. It is shown that regularization algorithms are stable when the regularization coefficient is large or when the ℓ1 norm of the kernel expansion of the outcome of the algorithm is small. These results try to show what kind of algorithmic properties are interesting for good generalization.

1 Introduction

Since the work of Vapnik and Chervonenkis ([16], [14]), many efforts have been made to obtain practical confidence bounds for the generalization ability of learning systems. Classical results say that such bounds depend on an exponential term and a complexity term which is, depending on the situation, the VC-dimension, the fat-shattering dimension or covering numbers. All these quantities have been bounded using combinatorial or functional analysis methods and have led to sample sizes which are gen...
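The claim that a large regularization coefficient yields a stable algorithm can be checked on a toy ridge regression: perturb one training example and measure how much the learned weights move. This sketch uses my own setup and the closed-form ridge solution; it does not reproduce the report's precise stability notion or bounds.

```python
import numpy as np

rng = np.random.default_rng(1)

def ridge_fit(X, y, lam):
    """Ridge regression weights: solve (X^T X + lam * I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Two training sets differing in a single example.
X = rng.normal(size=(50, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=50)
X2, y2 = X.copy(), y.copy()
X2[0] += 1.0
y2[0] += 1.0

def sensitivity(lam):
    """How far the learned predictor moves when one sample changes."""
    return float(np.linalg.norm(ridge_fit(X, y, lam) - ridge_fit(X2, y2, lam)))

# Heavier regularization -> smaller sensitivity -> a more stable
# algorithm, which is the property the report connects to generalization.
print(sensitivity(0.01), sensitivity(100.0))
```

Uniform bounds on this one-sample sensitivity are exactly what stability-based confidence bounds convert into generalization guarantees.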
Coverings and Metric Entropy Duality: Talk at the Workshop on Asymptotic Theory of the Geometry of Finite Dimensional Spaces
"... In this talk we discuss results concerning Metric Entropy and Covering Numbers. The covering number of a convex body K by a convex body T, denoted N(K, T), is defined as the minimal number of translates of T needed to cover K. If T is the unit ball of one space, and K is the image of the unit ball o ..."
Abstract
In this talk we discuss results concerning metric entropy and covering numbers. The covering number of a convex body K by a convex body T, denoted N(K, T), is defined as the minimal number of translates of T needed to cover K. If T is the unit ball of one space, and K is the image of the unit ball of another space under some given compact linear operator u, the number N(K, tT) as a function of the parameter t quantifies in some sense the compactness of u. (Metric entropy in operator theory is the inverse of this function; precise definitions below.) The computation of entropy numbers is usually extremely difficult. However, entropy numbers arise very naturally in the solution of many problems in analysis and in probability, for example in the study of Gaussian processes; understanding their behavior is an important goal. A well-known 30-year-old conjecture in this field is the duality of entropy numbers conjecture, which in the language of covering numbers states that, in an appropriate sense, as functions of t the expressions N(K, tT) and N(T◦, tK◦) are equivalent, where K◦ and T◦ denote the polar bodies of K and T respectively.
A REMARK ON TWO DUALITY RELATIONS
, 2008
"... Abstract. We remark that an easy combination of two known results yields a positive answer, up to log(n) terms, to a duality conjecture that goes back to Pietsch. In particular, we show that for any two symmetric convex bodies K, T in R n, denoting by N(K, T) the minimal number of translates of T ne ..."
Abstract
We remark that an easy combination of two known results yields a positive answer, up to log(n) terms, to a duality conjecture that goes back to Pietsch. In particular, we show that for any two symmetric convex bodies K, T in R^n, denoting by N(K, T) the minimal number of translates of T needed to cover K, one has N(K, T) ≤ N(T◦, (C log(n))^(−1) K◦)^(C log(n) log log(n)), where K◦, T◦ are the polar bodies of K, T, respectively, and C ≥ 1 is a universal constant. As a corollary, we observe a new duality result (up to log(n) terms) for Talagrand's γ_p functionals.