Results 1–4 of 4
Sample compression, learnability, and the Vapnik-Chervonenkis dimension
Machine Learning, 1995
Cited by 61 (3 self)

Abstract:
Within the framework of PAC-learning, we explore the learnability of concepts from samples using the paradigm of sample compression schemes. A sample compression scheme of size k for a concept class C ⊆ 2^X consists of a compression function and a reconstruction function. The compression function receives a finite sample set consistent with some concept in C and chooses a subset of k examples as the compression set. The reconstruction function forms a hypothesis on X from a compression set of k examples. For any sample set of a concept in C, the compression set produced by the compression function must lead to a hypothesis consistent with the whole original sample set when it is fed to the reconstruction function. We demonstrate that the existence of a sample compression scheme of fixed size for a class C is sufficient to ensure that the class C is PAC-learnable. Previous work has shown that a class is PAC-learnable if and only if the Vapnik-Chervonenkis (VC) dimension of the class i...
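To make the compression/reconstruction definitions concrete, here is a minimal sketch of a size-1 labeled compression scheme for the class of threshold concepts h_t(x) = 1 iff x ≥ t on the real line (VC dimension 1). The threshold class and the function names `compress` and `reconstruct` are illustrative choices for this sketch, not constructions taken from the paper.

```python
# Hypothetical size-1 labeled sample compression scheme for the
# threshold class h_t(x) = 1 iff x >= t on the real line.

def compress(sample):
    """Choose one labeled example that determines a consistent hypothesis.

    `sample` is a non-empty list of (x, label) pairs with labels in
    {0, 1}, assumed consistent with some threshold concept.
    """
    positives = [x for x, y in sample if y == 1]
    if positives:
        # Leftmost positive point: every positive lies at or to its
        # right, every negative lies strictly to its left.
        return (min(positives), 1)
    # All-negative sample: keep the rightmost point.
    return (max(x for x, _ in sample), 0)

def reconstruct(compressed):
    """Turn the one-example compression set back into a hypothesis."""
    x, label = compressed
    if label == 1:
        return lambda z: 1 if z >= x else 0
    return lambda z: 1 if z > x else 0

# Usage: the reconstructed hypothesis labels the full sample correctly.
sample = [(-2.0, 0), (0.5, 0), (1.0, 1), (3.0, 1)]
h = reconstruct(compress(sample))
assert all(h(x) == y for x, y in sample)
```

Because the sample is consistent with some threshold t, the leftmost positive point p satisfies t ≤ p, so thresholding at p agrees with every example in the sample, exactly the consistency requirement in the definition above.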
Unlabeled compression schemes for maximum classes
Journal of Machine Learning Research, 2006
Cited by 7 (0 self)

Abstract:
We give a compression scheme for any maximum class of VC dimension d that compresses any sample consistent with a concept in the class to at most d unlabeled points from the domain of the sample.
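As a toy illustration of the unlabeled variant, consider again the threshold class h_t(x) = 1 iff x ≥ t, a maximum class of VC dimension d = 1: the compression set is at most one unlabeled domain point, and the labels are recovered entirely by the reconstruction rule. This sketch is an assumed illustrative scheme, not the paper's general construction for maximum classes.

```python
# Hypothetical unlabeled compression scheme of size <= 1 for the
# threshold class h_t(x) = 1 iff x >= t (VC dimension d = 1).
# The compression set carries no labels; the reconstruction rule
# alone recovers a hypothesis consistent with the sample.

def compress_unlabeled(sample):
    """Return a set of at most one unlabeled domain point.

    `sample` is a list of (x, label) pairs assumed consistent with
    some threshold concept. If any positive example exists, keep the
    leftmost positive point; otherwise the compression set is empty.
    """
    positives = [x for x, y in sample if y == 1]
    return {min(positives)} if positives else set()

def reconstruct_unlabeled(points):
    """Map an unlabeled compression set back to a hypothesis."""
    if not points:
        return lambda z: 0            # empty set encodes "all negative"
    (p,) = points
    return lambda z: 1 if z >= p else 0

# Usage: the hypothesis agrees with the full sample in both cases.
sample = [(-1.0, 0), (2.0, 1), (4.0, 1)]
h = reconstruct_unlabeled(compress_unlabeled(sample))
assert all(h(x) == y for x, y in sample)
```

The key difference from the labeled scheme is that the empty compression set must itself encode one hypothesis (here, the all-negative concept), since no label bit is available to disambiguate.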
Shifting: One-Inclusion Mistake Bounds and Sample Compression
EECS Department, University of California, Berkeley, 2007
Geometric & Topological Representations of Maximum Classes with Applications to Sample Compression
Cited by 2 (1 self)

Abstract:
We systematically investigate finite maximum classes, which play an important role in machine learning as concept classes meeting Sauer’s Lemma with equality. Simple arrangements of hyperplanes in hyperbolic space are shown to represent maximum classes, generalizing the corresponding Euclidean result. We show that sweeping a generic hyperplane across such arrangements forms an unlabeled compression scheme of size equal to the VC dimension and corresponds to a special case of peeling the one-inclusion graph, resolving a conjecture of Kuzmin & Warmuth. A bijection between maximum classes and certain arrangements of piecewise-linear (PL) hyperplanes in either a ball or Euclidean space is established. Finally, we show that d-maximum classes corresponding to PL hyperplane arrangements in R^d have cubical complexes homeomorphic to a d-ball, or equivalently complexes that are manifolds with boundary.