Results 1–10 of 77
Scale-sensitive Dimensions, Uniform Convergence, and Learnability
, 1997
"... Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distributionfree convergence property of means to expectations uniformly over classes of random variables. Classes of realvalued functions enjoy ..."
Abstract

Cited by 208 (1 self)
Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoying such a property are also known as uniform Glivenko-Cantelli classes. In this paper we prove, through a generalization of Sauer's lemma that may be interesting in its own right, a new characterization of uniform Glivenko-Cantelli classes. Our characterization yields Dudley, Giné, and Zinn's previous characterization as a corollary. Furthermore, it is the first based on a simple combinatorial quantity generalizing the Vapnik-Chervonenkis dimension. We apply this result to obtain the weakest combinatorial condition known to imply PAC learnability in the statistical regression (or "agnostic") framework. Furthermore, we show a characterization of learnability in the probabilistic concept model, solving an open problem posed by Kearns and Schapire. These results show that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class.
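The Vapnik-Chervonenkis dimension that the abstract above generalizes can be illustrated with a small brute-force check; this is only a sketch for finite set families, and all function and variable names here are illustrative, not from the paper:

```python
from itertools import combinations

def shatters(family, points):
    """True iff every subset of `points` arises as `points & s` for some s in family."""
    points = frozenset(points)
    needed = {frozenset(c) for r in range(len(points) + 1)
              for c in combinations(points, r)}
    traces = {points & s for s in family}
    return needed <= traces

def vc_dimension(family, universe):
    """Largest cardinality of a shattered subset of `universe` (brute force)."""
    best = 0
    for r in range(1, len(universe) + 1):
        if any(shatters(family, pts) for pts in combinations(universe, r)):
            best = r
    return best

# Threshold sets {1..k} on {1,2,3}: no 2-point set is shattered,
# so the VC dimension is 1.
universe = frozenset({1, 2, 3})
thresholds = [frozenset(range(1, k + 1)) for k in range(0, 4)]
print(vc_dimension(thresholds, universe))  # → 1
```

The exponential search over subsets is what makes even computing (let alone approximating) the VC dimension hard in general, which is the theme of several entries below.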
Introduction to Statistical Learning Theory
 In O. Bousquet, U. von Luxburg, and G. Rätsch (Editors)
, 2004
"... ..."
A few notes on Statistical Learning Theory
, 2003
"... this article is on the theoretical side and not on the applicative one; hence, we shall not present examples which may be interesting from the practical point of view but have little theoretical significance. This survey is far from being complete and it focuses on problems the author finds interest ..."
Abstract

Cited by 52 (10 self)
this article is on the theoretical side and not on the applicative one; hence, we shall not present examples which may be interesting from the practical point of view but have little theoretical significance. This survey is far from being complete and it focuses on problems the author finds interesting (an opinion which is not necessarily shared by the majority of the learning community). Relevant books which present a more evenly balanced approach are, for example, [1, 4, 35, 36]. The starting point of our discussion is the formulation of the learning problem. Consider a class G, consisting of real-valued functions defined on a space Ω, and assume that each g ∈ G maps Ω into [0, 1]. Let T be an unknown function, T : Ω → [0, 1], and set μ to be an unknown probability measure on Ω
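The learning-problem formulation sketched in the abstract above (a class G of [0, 1]-valued functions, an unknown target T) is typically attacked by empirical risk minimization over a sample. A minimal toy sketch, with an assumed squared loss and a hypothetical three-function class G (none of these specifics are from the paper):

```python
import random

def empirical_risk(g, sample):
    """Average squared loss of hypothesis g on a finite sample of (x, T(x)) pairs."""
    return sum((g(x) - y) ** 2 for x, y in sample) / len(sample)

# Toy instance: G is a small class of [0,1]-valued functions on Omega = [0,1];
# T is the unknown target. All names and constants are illustrative.
G = [lambda x, c=c: min(1.0, c * x) for c in (0.5, 1.0, 2.0)]
T = lambda x: min(1.0, 0.9 * x)

random.seed(0)
sample = [(x, T(x)) for x in (random.random() for _ in range(200))]
best = min(G, key=lambda g: empirical_risk(g, sample))
print(round(empirical_risk(best, sample), 4))
```

Uniform convergence of empirical risks to true risks over G is precisely the Glivenko-Cantelli property discussed in the first entry of this list.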
On the Hardness of Being Truthful
 In 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS)
, 2008
"... The central problem in computational mechanism design is the tension between incentive compatibility and computational ef ciency. We establish the rst significant approximability gap between algorithms that are both truthful and computationallyef cient, and algorithms that only achieve one of these ..."
Abstract

Cited by 40 (5 self)
The central problem in computational mechanism design is the tension between incentive compatibility and computational efficiency. We establish the first significant approximability gap between algorithms that are both truthful and computationally efficient, and algorithms that only achieve one of these two desiderata. This is shown in the context of a novel mechanism design problem which we call the COMBINATORIAL PUBLIC PROJECT PROBLEM (CPPP). CPPP is an abstraction of many common mechanism design situations, ranging from elections of kibbutz committees to network design. Our result is actually made up of two complementary results: one in the communication-complexity model and one in the computational-complexity model. Both these hardness results heavily rely on a combinatorial characterization of truthful algorithms for our problem. Our computational-complexity result is one of the first impossibility results connecting mechanism design to complexity theory; its novel proof technique involves an application of the Sauer-Shelah Lemma and may be of wider applicability, both within and without mechanism design.
Clustering for Edge-Cost Minimization
"... Leonard J. Schulman College of Computing Georgia Institute of Technology Atlanta GA 303320280 ABSTRACT We address the problem of partitioning a set of n points into clusters, so as to minimize the sum, over all intracluster pairs of points, of the cost associated with each pair. We obtain a ra ..."
Abstract

Cited by 32 (5 self)
Leonard J. Schulman, College of Computing, Georgia Institute of Technology, Atlanta GA 30332-0280. ABSTRACT: We address the problem of partitioning a set of n points into clusters, so as to minimize the sum, over all intra-cluster pairs of points, of the cost associated with each pair. We obtain a randomized approximation algorithm for this problem, for the cost functions ℓ₂², ℓ₁, and ℓ₂, as well as any cost function isometrically embeddable in ℓ₂².
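The objective this abstract describes (sum of pairwise costs over intra-cluster pairs) is easy to state in code; below is a minimal sketch of the objective itself for the ℓ₂² cost, not of the paper's randomized approximation algorithm, and the point sets and names are made up for illustration:

```python
from itertools import combinations

def intra_cluster_cost(clusters, cost):
    """Sum of pairwise costs over all intra-cluster pairs of points."""
    return sum(cost(p, q)
               for cluster in clusters
               for p, q in combinations(cluster, 2))

def l2_squared(p, q):
    """The squared-Euclidean (l_2^2) cost from the abstract."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

points = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
good = [[points[0], points[1]], [points[2], points[3]]]
bad  = [[points[0], points[2]], [points[1], points[3]]]
print(intra_cluster_cost(good, l2_squared))  # → 2.0
print(intra_cluster_cost(bad, l2_squared))   # → 102.0
```

Minimizing this objective exactly over all partitions is computationally hard in general, which is why the paper seeks approximation guarantees.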
Projections of bodies and hereditary properties of hypergraphs
 Bull. London Math. Soc
, 1995
"... We prove that for every Mdimensional body K, there is a rectangular parallelepiped B of the same volume as K, such that the projection of B onto any coordinate subspace is at most as large as that of the corresponding projection of K. We apply this theorem to projections of finite set systems and t ..."
Abstract

Cited by 27 (5 self)
We prove that for every n-dimensional body K, there is a rectangular parallelepiped B of the same volume as K, such that the projection of B onto any coordinate subspace is at most as large as that of the corresponding projection of K. We apply this theorem to projections of finite set systems and to hereditary properties. In particular, we show that every hereditary property of uniform hypergraphs has a limiting density. 1. Projections of bodies. Let K be a body in ℝⁿ, and let (v₁, ..., vₙ) be the standard basis for ℝⁿ. Denote the volume of K by |K|. Furthermore, given a subset A ⊆ [n] = {1, 2, ..., n} with d elements, denote by K_A the orthogonal projection of K onto the subspace spanned by {vᵢ : i ∈ A}, and by |K_A| its (d-dimensional) volume. Thus K_[n] = K. By the term box we shall mean a rectangular parallelepiped whose sides are parallel to the coordinate axes. For the purposes of this paper, a body can be taken to be a compact subset of ℝⁿ which is the closure of its interior. It would be effortless to rewrite our results and ...
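A discrete analogue of the projection phenomenon above is the Loomis-Whitney inequality for finite point sets: |S|^(n-1) ≤ ∏ᵢ |S_{[n]∖{i}}|, where each factor is the size of the projection dropping one coordinate. A small numeric check of that inequality (a sketch under my own naming, not code from the paper):

```python
from math import prod

def projection_sizes(points):
    """Sizes of the (n-1)-dimensional coordinate projections of a finite
    point set in Z^n, dropping one coordinate at a time."""
    n = len(next(iter(points)))
    return [len({p[:i] + p[i + 1:] for p in points}) for i in range(n)]

# Discrete Loomis-Whitney: |S|^(n-1) <= product of projection sizes.
S = {(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)}
sizes = projection_sizes(S)
assert len(S) ** (len(sizes) - 1) <= prod(sizes)
print(len(S), sizes)  # → 5 [4, 4, 4]
```

For an axis-parallel box of lattice points the inequality is tight, which mirrors the paper's theme that boxes are extremal for projections.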
On the complexity of approximating the VC dimension
 J. Comput. Syst. Sci
, 2001
"... We study the complexity of approximating the VC dimension of a collection of sets, when the sets are encoded succinctly by a small circuit. We show that this problem is • Σ p 3hard to approximate to within a factor 2 − ɛ for any ɛ> 0, • approximable in AM to within a factor 2, and • AMhard to appr ..."
Abstract

Cited by 21 (2 self)
We study the complexity of approximating the VC dimension of a collection of sets, when the sets are encoded succinctly by a small circuit. We show that this problem is
• Σ₃ᵖ-hard to approximate to within a factor 2 − ε for any ε > 0,
• approximable in AM to within a factor 2, and
• AM-hard to approximate to within a factor N^ε for some constant ε > 0.
To obtain the Σ₃ᵖ-hardness result we solve a randomness extraction problem using list-decodable binary codes; for the positive result we utilize the Sauer-Shelah(-Perles) Lemma. The exact value of ε in the AM-hardness result depends on the degree achievable by explicit disperser constructions.
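The Sauer-Shelah(-Perles) Lemma used for the positive result above bounds a family of VC dimension d on an n-point universe by Σᵢ₌₀..d C(n, i). A small numeric verification on discrete intervals, which have VC dimension 2 and in fact meet the bound with equality; the helper names are my own:

```python
from itertools import combinations
from math import comb

def vc_dim(family, universe):
    """VC dimension of a finite set family (brute force over subsets)."""
    d = 0
    for r in range(1, len(universe) + 1):
        if any(len({frozenset(pts) & s for s in family}) == 2 ** r
               for pts in combinations(universe, r)):
            d = r
    return d

n = 6
universe = tuple(range(n))
# All discrete intervals {a, a+1, ..., b-1} on {0..5}, including the empty one.
family = {frozenset(range(a, b)) for a in range(n) for b in range(a, n + 1)}
d = vc_dim(family, universe)
bound = sum(comb(n, i) for i in range(d + 1))
print(d, len(family), bound)  # Sauer-Shelah: len(family) <= bound
assert len(family) <= bound
```

Note the brute force enumerates all subsets of the universe; for sets encoded succinctly by a circuit, as in the abstract, no such direct computation is feasible, which is the source of the hardness results.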
VC Dimension of Neural Networks
 Neural Networks and Machine Learning
, 1998
"... . This paper presents a brief introduction to VapnikChervonenkis (VC) dimension, a quantity which characterizes the difficulty of distributionindependent learning. The paper establishes various elementary results, and discusses how to estimate the VC dimension in several examples of interest in ne ..."
Abstract

Cited by 20 (3 self)
This paper presents a brief introduction to Vapnik-Chervonenkis (VC) dimension, a quantity which characterizes the difficulty of distribution-independent learning. The paper establishes various elementary results, and discusses how to estimate the VC dimension in several examples of interest in neural network theory. 1 Introduction. In this expository paper, we present a brief introduction to the subject of computing and estimating the VC dimension of neural network architectures. We provide precise definitions and prove several basic results, discussing also how one estimates VC dimension in several examples of interest in neural network theory. We do not address the learning and estimation-theoretic applications of VC dimension. (Roughly, the VC dimension is a number which helps to quantify the difficulty when learning from examples. The sample complexity, that is, the number of "learning instances" that one must be exposed to, in order to be reasonably certain to derive accurate p...
Probabilistic Analysis of Learning in Artificial Neural Networks: The PAC Model and its Variants
, 1997
"... There are a number of mathematical approaches to the study of learning and generalization in artificial neural networks. Here we survey the `probably approximately correct' (PAC) model of learning and some of its variants. These models provide a probabilistic framework for the discussion of generali ..."
Abstract

Cited by 18 (4 self)
There are a number of mathematical approaches to the study of learning and generalization in artificial neural networks. Here we survey the 'probably approximately correct' (PAC) model of learning and some of its variants. These models provide a probabilistic framework for the discussion of generalization and learning. This survey concentrates on the sample complexity questions in these models; that is, the emphasis is on how many examples should be used for training. Computational complexity considerations are briefly discussed for the basic PAC model. Throughout, the importance of the Vapnik-Chervonenkis dimension is highlighted. Particular attention is devoted to describing how the probabilistic models apply in the context of neural network learning, both for networks with binary-valued output and for networks with real-valued output.
Quantifying the Amount of Verboseness
, 1995
"... We study the fine structure of the classification of sets of natural numbers A according to the number of queries which are needed to compute the nfold characteristic function of A. A complete characterization is obtained, relating the question to finite combinatorics. In order to obtain an explic ..."
Abstract

Cited by 16 (6 self)
We study the fine structure of the classification of sets of natural numbers A according to the number of queries which are needed to compute the n-fold characteristic function of A. A complete characterization is obtained, relating the question to finite combinatorics. In order to obtain an explicit description we consider several interesting combinatorial problems. 1 Introduction. In the theory of bounded queries, we measure the complexity of a function by the number of queries to an oracle which are needed to compute it. The field has developed in various directions, both in complexity theory and in recursion theory; see Gasarch [21] for a recent survey. One of the original concerns is the classification of sets A of natural numbers by their "query complexity," i.e., according to the number of oracle queries that are needed to compute the n-fold characteristic function F_n^A : (x_1, ..., x_n) ↦ (χ_A(x_1), ..., χ_A(x_n)). In [3, 8] a set A is called verbose iff F_n^A is com...
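The object of study above, the n-fold characteristic function F_n^A, is easy to make concrete with a query-counting membership oracle; this sketch only exhibits the trivial upper bound of n queries, not the paper's characterization, and the set A and all names are made up for illustration:

```python
class CountingOracle:
    """Membership oracle for a set A that counts how often it is queried."""
    def __init__(self, A):
        self.A = A
        self.queries = 0

    def __call__(self, x):
        self.queries += 1
        return x in self.A

def n_fold_characteristic(oracle, xs):
    """F_n^A: the tuple (chi_A(x_1), ..., chi_A(x_n)), computed naively
    with one oracle query per argument (n queries in total)."""
    return tuple(int(oracle(x)) for x in xs)

A = {2, 3, 5, 7, 11}  # a sample set, purely illustrative
oracle = CountingOracle(A)
print(n_fold_characteristic(oracle, [1, 2, 3, 4]))  # → (0, 1, 1, 0)
print(oracle.queries)  # → 4
```

Bounded-query theory asks for which sets A the n bits of F_n^A can be recovered with fewer than n queries, which is where the finite combinatorics mentioned in the abstract enters.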