Results 1 - 8 of 8
A nearest hyperrectangle learning method
 Machine Learning
, 1991
Cited by 168 (7 self)
Abstract. This paper presents a theory of learning called nested generalized exemplar (NGE) theory, in which learning is accomplished by storing objects in Euclidean n-space, E^n, as hyperrectangles. The hyperrectangles may be nested inside one another to arbitrary depth. In contrast to generalization processes that replace symbolic formulae by more general formulae, the NGE algorithm modifies hyperrectangles by growing and reshaping them in a well-defined fashion. The axes of these hyperrectangles are defined by the variables measured for each example. Each variable can have any range on the real line; thus the theory is not restricted to symbolic or binary values. This paper describes some advantages and disadvantages of NGE theory, positions it as a form of exemplar-based learning, and compares it to other inductive learning theories. An implementation has been tested in three different domains, for which results are presented below: prediction of breast cancer, classification of iris flowers, and prediction of survival times for heart attack patients. The results in these domains support the claim that NGE theory can be used to create compact representations with excellent predictive accuracy.
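The nearest-hyperrectangle idea described in the abstract can be sketched in a few lines. The class and method names below are illustrative, not from the paper: a point inside a box is at distance 0, otherwise its distance is the Euclidean distance to the nearest face, and a box generalizes by stretching to cover a new example.

```python
import math

class Hyperrectangle:
    """Axis-aligned box in n-space with a class label, as in NGE theory."""
    def __init__(self, lower, upper, label):
        self.lower, self.upper, self.label = list(lower), list(upper), label

    def distance(self, point):
        # Zero if the point lies inside the box; otherwise the Euclidean
        # distance to the closest face.
        d2 = 0.0
        for x, lo, hi in zip(point, self.lower, self.upper):
            if x < lo:
                d2 += (lo - x) ** 2
            elif x > hi:
                d2 += (x - hi) ** 2
        return math.sqrt(d2)

    def grow_to_include(self, point):
        # NGE-style generalization: stretch the box to cover a new example.
        self.lower = [min(l, x) for l, x in zip(self.lower, point)]
        self.upper = [max(u, x) for u, x in zip(self.upper, point)]

def classify(boxes, point):
    """Predict the label of the nearest stored hyperrectangle."""
    return min(boxes, key=lambda b: b.distance(point)).label
```

Nesting and the paper's actual growth/weighting rules are omitted; this only shows the distance-to-box classification step.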
Inductive Inference, DFAs and Computational Complexity
 2nd Int. Workshop on Analogical and Inductive Inference (AII
, 1989
Cited by 78 (1 self)
This paper surveys recent results concerning the inference of deterministic finite automata (DFAs). The results discussed determine the extent to which DFAs can be feasibly inferred, and highlight a number of interesting approaches in computational learning theory.
Online algorithms in machine learning
 In Fiat and Woeginger, eds., Online Algorithms: The State of the Art
, 1998
Cited by 61 (2 self)
The areas of On-Line Algorithms and Machine Learning are both concerned with problems of making decisions about the present based only on knowledge of the past. Although these areas differ in terms of their emphasis and the problems typically studied, there is a collection of results in Computational Learning Theory that fits nicely into the "online algorithms" framework. This survey article discusses some of the results, models, and open problems from Computational Learning Theory that seem particularly interesting from the point of view of online algorithms. The emphasis in this article is on describing some of the simpler, more intuitive results, whose proofs can be given in their entirety. Pointers to the literature are given for more sophisticated versions of these algorithms.
Sample compression, learnability, and the Vapnik-Chervonenkis dimension
 Machine Learning
, 1995
Cited by 61 (3 self)
Within the framework of PAC-learning, we explore the learnability of concepts from samples using the paradigm of sample compression schemes. A sample compression scheme of size k for a concept class C ⊆ 2^X consists of a compression function and a reconstruction function. The compression function receives a finite sample set consistent with some concept in C and chooses a subset of k examples as the compression set. The reconstruction function forms a hypothesis on X from a compression set of k examples. For any sample set of a concept in C the compression set produced by the compression function must lead to a hypothesis consistent with the whole original sample set when it is fed to the reconstruction function. We demonstrate that the existence of a sample compression scheme of fixed size for a class C is sufficient to ensure that the class C is PAC-learnable. Previous work has shown that a class is PAC-learnable if and only if the Vapnik-Chervonenkis (VC) dimension of the class i...
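The compression/reconstruction pair described above has a classic concrete instance: closed intervals on the real line admit a sample compression scheme of size 2, keeping only the leftmost and rightmost positive examples. The function names below are ours, a minimal sketch rather than the paper's construction:

```python
def compress(sample):
    """sample: list of (x, label) pairs consistent with some interval concept.
    Keep at most two examples: the leftmost and rightmost positives."""
    positives = sorted(x for x, y in sample if y == 1)
    if not positives:
        return []          # empty compression set -> all-negative hypothesis
    return [positives[0], positives[-1]]

def reconstruct(compression_set):
    """Rebuild a hypothesis (a 0/1 predicate on reals) from the compression set."""
    if not compression_set:
        return lambda x: 0
    lo, hi = min(compression_set), max(compression_set)
    return lambda x: 1 if lo <= x <= hi else 0
```

The defining property holds here: feeding the compression set back through `reconstruct` yields a hypothesis consistent with the entire original sample.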
Characterizing PAC-learnability of Semilinear Sets
 Inform. and Comput
, 1998
Cited by 5 (0 self)
The learnability of the class of letter-counts of regular languages (semilinear sets) and other related classes of subsets of N^d with respect to the distribution-free learning model of Valiant (PAC-learning model) is characterized. Using the notion of reducibility among learning problems due to Pitt and Warmuth called "prediction preserving reducibility," and a special case thereof, a number of positive and partially negative results are obtained. On the positive side, the class of semilinear sets of dimension 1 or 2 is shown to be learnable when the integers are encoded in unary. On the neutral to negative side, it is shown that when the integers are encoded in binary, the learning problem for semilinear sets, as well as for a class of subsets of Z^d much simpler than semilinear sets, is as hard as learning DNF, a central open problem in the field. A number of hardness results for related learning problems are also given.
Part 1: Overview of the Probably Approximately Correct (PAC) Learning Framework
, 1995
Cited by 4 (0 self)
Here we survey some recent theoretical results on the efficiency of machine learning algorithms. The main tool described is the notion of Probably Approximately Correct (PAC) learning, introduced by Valiant. We define this learning model and then look at some of the results obtained in it. We then consider some criticisms of the PAC model and the extensions proposed to address these criticisms. Finally, we look briefly at other models recently proposed in computational learning theory.
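One of the basic results covered in overviews of Valiant's PAC model is the sample-complexity bound for a consistent learner over a finite hypothesis class H: m >= (1/eps)(ln|H| + ln(1/delta)) examples suffice for error at most eps with probability at least 1 - delta. A small helper (the function name is ours) computes this standard bound:

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Occam/consistency bound for a finite class H:
    m >= (1/epsilon) * (ln|H| + ln(1/delta)) examples suffice so that any
    hypothesis consistent with the sample has true error <= epsilon
    with probability >= 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)
```

For example, with |H| = 2^10 hypotheses, epsilon = 0.1, and delta = 0.05, the bound gives 100 examples.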
Projective DNF formulae and their revision
 In Learning Theory and Kernel Machines, 16th Annual Conference on Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003
Cited by 2 (1 self)
Valiant argued that biology imposes various constraints on learnability, and, motivated by these constraints, introduced his model of projection learning [14]. Projection learning aims to learn a target concept over some large domain, in this paper {0, 1} n, by learning some of its projections to a class of smaller domains,
On Online Learning of Decision Lists
 Journal of Machine Learning Research 3 (2002), 271-301; Submitted 11/01; Revised 7/02; Published 10/02
A fundamental open problem in computational learning theory is whether there is an attribute-efficient learning algorithm for the concept class of decision lists (Rivest, 1987; Blum, 1996). We consider a weaker problem, where the concept class is restricted to decision lists with D alternations. For this class, we present a novel online algorithm that achieves a mistake bound of O(r log n), where r is the number of relevant variables, and n is the total number of variables. The algorithm can be viewed as a strict generalization of the famous Winnow algorithm by Littlestone (1988), and improves the O(r log n) mistake bound of Balanced Winnow. Our bound is stronger than a similar PAC-learning result of Dhagat and Hellerstein (1994). A combination of our algorithm with the algorithm suggested by Rivest (1987) might achieve even better bounds.
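The Winnow algorithm referenced above learns a linear threshold with multiplicative weight updates, which is what yields mistake bounds logarithmic in n. A minimal sketch for monotone disjunctions over {0,1}^n follows; the function signature and the fixed promotion/demotion factor of 2 are our simplifying choices, not taken from this paper:

```python
def winnow(examples, n, threshold=None):
    """Littlestone-style Winnow for monotone disjunctions over {0,1}^n.
    examples: iterable of (x, y) with x a 0/1 vector and y the true label.
    Returns the final weight vector and the number of mistakes made."""
    theta = threshold if threshold is not None else n / 2
    w = [1.0] * n
    mistakes = 0
    for x, y in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred != y:
            mistakes += 1
            if y == 1:   # promotion: double weights of active attributes
                w = [wi * 2 if xi else wi for wi, xi in zip(w, x)]
            else:        # demotion: halve weights of active attributes
                w = [wi / 2 if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes
```

Because only the weights of attributes active in a mistake are updated, irrelevant variables are never promoted, which is the intuition behind attribute efficiency.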