Results 1 - 5 of 5
Generalization Performance of Regularization Networks and Support ...
IEEE Transactions on Information Theory, 2001
"... We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks by obtaining new bounds on their covering numbers. The proofs make use of a viewpoint that is apparently novel in the field of statistical learning theory. The hy ..."
Abstract

Cited by 73 (20 self)
We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs make use of a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite-dimensional unit ball in feature space into a finite-dimensional space. The covering numbers of the class are then determined via the entropy numbers of the operator. These numbers, which characterize the degree of compactness of the operator, can be bounded in terms of the eigenvalues of an integral operator induced by the kernel function used by the machine. As a consequence, we are able to theoretically explain the effect of the choice of kernel function on the generalization performance of support vector machines.
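The eigenvalue connection in this abstract is concrete enough to illustrate numerically. Below is a minimal sketch, not taken from the paper: the Gaussian RBF kernel, sample size, and bandwidth are illustrative assumptions. It shows how the spectrum of the integral operator induced by a kernel can be approximated by the eigenvalues of a normalized Gram matrix; the rapid eigenvalue decay of a smooth kernel is what drives small entropy numbers and hence tighter covering-number bounds.

```python
# Sketch: empirically approximating the spectrum of the integral operator
# induced by a kernel. The kernel and sampling scheme here are illustrative
# assumptions, not the paper's setup.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 1))  # sample from the input domain

# Eigenvalues of (1/m) K approximate those of the integral operator
# (T_k f)(x) = integral of k(x, y) f(y) dP(y); their decay rate controls
# the entropy numbers of the induced hypothesis class.
K = rbf_kernel(X, X)
eigvals = np.linalg.eigvalsh(K / len(X))[::-1]  # sorted descending
print(eigvals[:10])  # rapidly decaying spectrum for a smooth kernel
```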
Covering Number Bounds of Certain Regularized Linear Function Classes
Journal of Machine Learning Research, 2002
"... Recently, sample complexity bounds have been derived for problems involving linear functions such as neural networks and support vector machines. In many of these theoretical studies, the concept of covering numbers played an important role. It is thus useful to study covering numbers for linear ..."
Abstract

Cited by 42 (3 self)
Recently, sample complexity bounds have been derived for problems involving linear functions, such as neural networks and support vector machines. In many of these theoretical studies, the concept of covering numbers played an important role. It is thus useful to study covering numbers for linear function classes. In this paper, we investigate two closely related methods to derive upper bounds on these covering numbers. The first method, already employed in some earlier studies, relies on the so-called Maurey lemma; the second method uses techniques from the mistake bound framework in online learning. We compare results from these two methods, as well as their consequences in some learning formulations.
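For orientation, here is a representative bound of the kind this line of work produces, stated from memory as a sketch rather than quoted from the paper (it follows Zhang's Maurey-based argument). For the class of bounded linear functionals on a bounded domain,

\[
\mathcal{F} = \{\, x \mapsto \langle w, x \rangle : \|w\|_2 \le a \,\}, \qquad \|x\|_2 \le b,
\]

the empirical covering numbers at scale $\epsilon$ over any $n$ sample points satisfy

\[
\log_2 \mathcal{N}_\infty(\epsilon, \mathcal{F}, n) \;\le\; \left\lceil \frac{a^2 b^2}{\epsilon^2} \right\rceil \log_2(2n + 1).
\]

Maurey's lemma supplies the key step: any $w$ in the ball can be approximated to accuracy $\epsilon$ by a sparse convex combination of roughly $a^2 b^2 / \epsilon^2$ extreme points, and counting those sparse combinations yields the bound.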
Mean Topological Dimension
Israel Journal of Mathematics, 2000
"... . In this paper we present some results and applications of a new invariant for dynamical systems that can be viewed as a dynamical analogue of topological dimension. This invariant has been introduced by M. Gromov, and enables one to assign a meaningful quantity to dynamical systems of infinite ..."
Abstract

Cited by 29 (1 self)
In this paper we present some results and applications of a new invariant for dynamical systems that can be viewed as a dynamical analogue of topological dimension. This invariant has been introduced by M. Gromov, and enables one to assign a meaningful quantity to dynamical systems of infinite topological dimension and entropy. We also develop an alternative approach that is metric dependent and is intimately related to topological entropy.

1. Introduction

One of the basic invariants of a dynamical system (X, T) is its topological entropy. This quantifies to what extent nearby points diverge as the system evolves. For the shift on {1, 2, ..., k}^Z, the topological entropy is log k and thus gives a dynamical interpretation of the cardinality of the set of states. For the shift on K^Z, where K is an infinite compact space, this invariant is always +∞, and thus gives no information about K other than the fact that it is infinite. Recently, M. Gromov suggested a definition...
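The log k claim above has a one-line derivation worth making explicit (standard definitions; a sketch, not taken from the paper):

\[
h_{\mathrm{top}}(\sigma) \;=\; \lim_{n \to \infty} \frac{1}{n} \log N(n)
\;=\; \lim_{n \to \infty} \frac{1}{n} \log k^{n} \;=\; \log k,
\]

where $N(n) = k^n$ counts the distinct words of length $n$ occurring in the full shift $\sigma$ on $\{1, \dots, k\}^{\mathbb{Z}}$. For $K^{\mathbb{Z}}$ with $K$ infinite and compact, any $\epsilon > 0$ admits $N(\epsilon)$ points of $K$ that are pairwise $\epsilon$-separated, giving on the order of $N(\epsilon)^n$ separated orbit segments of length $n$; the entropy is therefore at least $\log N(\epsilon)$ for every $\epsilon$, and since $N(\epsilon) \to \infty$ as $\epsilon \to 0$, it equals $+\infty$. This degeneracy is exactly what mean topological dimension is designed to resolve.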
Entropy Numbers, Operators and Support Vector Kernels
IEEE Transactions on Information Theory, 1998
"... We derive new bounds for the generalization error of feature space machines, such as support vector machines and related regularization networks by obtaining new bounds on their covering numbers. The proofs are based on a viewpoint that is apparently novel in the field of statistical learning theory ..."
Abstract

Cited by 11 (3 self)
We derive new bounds for the generalization error of feature space machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs are based on a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite-dimensional unit ball in feature space into a finite-dimensional space. The covering numbers of the class are then determined via the entropy numbers of the operator. These numbers, which characterize the degree of compactness of the operator, can be bounded in terms of the eigenvalues of an integral operator induced by the kernel function used by the machine. As a consequence, we are able to theoretically explain the effect of the choice of kernel functions on the generalization performance of support vector machines.
Produced as part of the ESPRIT Working Group in Neural and Computational Learning II, 1998
"... We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks by obtaining new bounds on their covering numbers. The proofs make use ..."
Abstract
We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs make use ...