Results 1 – 10 of 11
A nearest hyperrectangle learning method
 Machine Learning
, 1991
Abstract

Cited by 187 (7 self)
Abstract. This paper presents a theory of learning called nested generalized exemplar (NGE) theory, in which learning is accomplished by storing objects in Euclidean n-space, E^n, as hyperrectangles. The hyperrectangles may be nested inside one another to arbitrary depth. In contrast to generalization processes that replace symbolic formulae by more general formulae, the NGE algorithm modifies hyperrectangles by growing and reshaping them in a well-defined fashion. The axes of these hyperrectangles are defined by the variables measured for each example. Each variable can have any range on the real line; thus the theory is not restricted to symbolic or binary values. This paper describes some advantages and disadvantages of NGE theory, positions it as a form of exemplar-based learning, and compares it to other inductive learning theories. An implementation has been tested in three different domains, for which results are presented below: prediction of breast cancer, classification of iris flowers, and prediction of survival times for heart attack patients. The results in these domains support the claim that NGE theory can be used to create compact representations with excellent predictive accuracy.
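At classification time, NGE-style learners assign a query point to the nearest stored hyperrectangle, where distance is zero for points inside a rectangle and otherwise measured to its closest face. A minimal sketch of that distance rule (the function names and the tiny two-rectangle memory are illustrative, not the paper's implementation):

```python
import numpy as np

def rect_distance(point, lower, upper):
    """Distance from a query point to an axis-aligned hyperrectangle.

    Per dimension the contribution is zero when the coordinate falls
    inside [lower, upper]; otherwise it is the gap to the nearest face.
    """
    below = np.maximum(lower - point, 0.0)   # shortfall under the lower face
    above = np.maximum(point - upper, 0.0)   # overshoot past the upper face
    return float(np.sqrt(np.sum((below + above) ** 2)))

def classify(point, rectangles):
    """Predict the label of the nearest stored hyperrectangle.

    `rectangles` is a list of (lower, upper, label) triples; ties are
    broken arbitrarily in this sketch.
    """
    return min(rectangles, key=lambda r: rect_distance(point, r[0], r[1]))[2]

# Illustrative memory of two labelled hyperrectangles in the plane.
memory = [
    (np.array([0.0, 0.0]), np.array([1.0, 1.0]), "A"),
    (np.array([2.0, 2.0]), np.array([3.0, 3.0]), "B"),
]
```

The full algorithm additionally grows the winning rectangle to absorb correctly predicted examples and adjusts per-feature weights; this sketch covers only the distance-based prediction step.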
Inductive Inference, DFAs and Computational Complexity
 2nd Int. Workshop on Analogical and Inductive Inference (AII
, 1989
Abstract

Cited by 91 (1 self)
This paper surveys recent results concerning the inference of deterministic finite automata (DFAs). The results discussed determine the extent to which DFAs can be feasibly inferred, and highlight a number of interesting approaches in computational learning theory.
Online algorithms in machine learning
 In Fiat and Woeginger, eds., Online Algorithms: The State of the Art
, 1998
Abstract

Cited by 70 (2 self)
The areas of On-Line Algorithms and Machine Learning are both concerned with problems of making decisions about the present based only on knowledge of the past. Although these areas differ in terms of their emphasis and the problems typically studied, there is a collection of results in Computational Learning Theory that fit nicely into the "online algorithms" framework. This survey article discusses some of the results, models, and open problems from Computational Learning Theory that seem particularly interesting from the point of view of online algorithms. The emphasis in this article is on describing some of the simpler, more intuitive results, whose proofs can be given in their entirety. Pointers to the literature are given for more sophisticated versions of these algorithms.
Sample compression, learnability, and the Vapnik-Chervonenkis dimension
 MACHINE LEARNING
, 1995
Abstract

Cited by 69 (5 self)
Within the framework of PAC-learning, we explore the learnability of concepts from samples using the paradigm of sample compression schemes. A sample compression scheme of size k for a concept class C ⊆ 2^X consists of a compression function and a reconstruction function. The compression function receives a finite sample set consistent with some concept in C and chooses a subset of k examples as the compression set. The reconstruction function forms a hypothesis on X from a compression set of k examples. For any sample set of a concept in C the compression set produced by the compression function must lead to a hypothesis consistent with the whole original sample set when it is fed to the reconstruction function. We demonstrate that the existence of a sample compression scheme of fixed size for a class C is sufficient to ensure that the class C is PAC-learnable. Previous work has shown that a class is PAC-learnable if and only if the Vapnik-Chervonenkis (VC) dimension of the class i...
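The compression/reconstruction pairing described above can be illustrated on the simple class of closed intervals on the real line, which admits a compression scheme of size 2: keep the two extreme positive points, and reconstruct the smallest interval covering them. The interval class and helper names are an illustrative choice, not taken from the paper:

```python
def compress(sample):
    """Compression function of size 2 for closed intervals on the line.

    `sample` is a list of (x, label) pairs consistent with some interval
    [a, b]. Keeps only the leftmost and rightmost positive points.
    (A sample with no positives would need a separate convention,
    omitted from this sketch.)
    """
    positives = [x for x, label in sample if label]
    return (min(positives), max(positives))

def reconstruct(compression_set):
    """Reconstruction function: the smallest interval covering the set."""
    lo, hi = compression_set
    return lambda x: lo <= x <= hi

# A sample consistent with some interval containing 0.5 and 1.0 but
# not -2.0 or 3.0:
sample = [(-2.0, False), (0.5, True), (1.0, True), (3.0, False)]
h = reconstruct(compress(sample))
```

Because the target concept is an interval containing every positive point, any negative point must lie outside the span of the positives, so the reconstructed hypothesis is consistent with the whole original sample, as the scheme requires.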
Characterizing PAC-learnability of Semilinear Sets
 Inform. and Comput
, 1998
Abstract

Cited by 6 (0 self)
The learnability of the class of letter-counts of regular languages (semilinear sets) and other related classes of subsets of N^d with respect to the distribution-free learning model of Valiant (PAC-learning model) is characterized. Using the notion of reducibility among learning problems due to Pitt and Warmuth called "prediction preserving reducibility," and a special case thereof, a number of positive and partially negative results are obtained. On the positive side the class of semilinear sets of dimension 1 or 2 is shown to be learnable when the integers are encoded in unary. On the neutral to negative side it is shown that when the integers are encoded in binary the learning problem for semilinear sets as well as a class of subsets of Z^d much simpler than semilinear sets is as hard as learning DNF, a central open problem in the field. A number of hardness results for related learning problems are also given.
Part 1: Overview of the Probably Approximately Correct (PAC) Learning Framework
, 1995
Abstract

Cited by 5 (0 self)
Here we survey some recent theoretical results on the efficiency of machine learning algorithms. The main tool described is the notion of Probably Approximately Correct (PAC) learning, introduced by Valiant. We define this learning model and then look at some of the results obtained in it. We then consider some criticisms of the PAC model and the extensions proposed to address these criticisms. Finally, we look briefly at other models recently proposed in computational learning theory.
Projective DNF formulae and their revision
 In Learning Theory and Kernel Machines, 16th Annual Conference on Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003
Abstract

Cited by 3 (1 self)
Valiant argued that biology imposes various constraints on learnability, and, motivated by these constraints, introduced his model of projection learning [14]. Projection learning aims to learn a target concept over some large domain, in this paper {0, 1} n, by learning some of its projections to a class of smaller domains,
Learning What’s Going on: Reconstructing Preferences and Priorities from Opaque Transactions
Abstract

Cited by 1 (0 self)
We consider a setting where n buyers, with combinatorial preferences over m items, and a seller, running a priority-based allocation mechanism, repeatedly interact. Our goal, from observing limited information about the results of these interactions, is to reconstruct both the preferences of the buyers and the mechanism of the seller. More specifically, we consider an online setting where at each stage, a subset of the buyers arrive and are allocated items, according to some unknown priority that the seller has among the buyers. Our learning algorithm observes only which buyers arrive and the allocation produced (or some function of the allocation, such as just which buyers received positive utility and which did not), and its goal is to predict the outcome for future subsets of buyers. For this task, the learning algorithm needs to reconstruct both the priority among the buyers and the preferences of each buyer. We derive mistake bound algorithms for additive, unit-demand and single-minded buyers. We also consider the case where buyers' utilities for a fixed bundle can change between stages due to different (observed) prices. Our algorithms are efficient both in computation time and in the maximum number of mistakes (both polynomial in the number of buyers and items).
On Online Learning of Decision Lists
 Journal of Machine Learning Research 3 (2002) 271-301. Submitted 11/01; Revised 7/02; Published 10/02
"... A fundamental open problem in computational learning theory is whether there is an attribute e#cient learning algorithm for the concept class of decision lists (Rivest, 1987; Blum, 1996). We consider a weaker problem, where the concept class is restricted to decision lists with D alternations. Fo ..."
Abstract
A fundamental open problem in computational learning theory is whether there is an attribute-efficient learning algorithm for the concept class of decision lists (Rivest, 1987; Blum, 1996). We consider a weaker problem, where the concept class is restricted to decision lists with D alternations. For this class, we present a novel online algorithm that achieves a mistake bound of O(r log n), where r is the number of relevant variables, and n is the total number of variables. The algorithm can be viewed as a strict generalization of the famous Winnow algorithm by Littlestone (1988), and improves the O(r log n) mistake bound of Balanced Winnow. Our bound is stronger than a similar PAC-learning result of Dhagat and Hellerstein (1994). A combination of our algorithm with the algorithm suggested by Rivest (1987) might achieve even better bounds.
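The multiplicative-update scheme that this abstract generalizes is Littlestone's basic Winnow for monotone disjunctions: predict positive when the weighted count of active attributes crosses a threshold, double the weights of active attributes on a false negative, and halve them on a false positive. A minimal sketch (the example stream and attribute count below are illustrative):

```python
def winnow(examples, n):
    """Basic Winnow (Littlestone, 1988) for monotone disjunctions over
    n Boolean attributes. Returns the final weights and the number of
    mistakes made; the mistake bound for a target disjunction with r
    relevant variables is O(r log n).
    """
    w = [1.0] * n          # all weights start at 1
    mistakes = 0
    for x, y in examples:  # x: 0/1 attribute vector, y: Boolean label
        pred = sum(wi for wi, xi in zip(w, x) if xi) >= n
        if pred != y:
            mistakes += 1
            # promote active weights on a false negative,
            # demote them on a false positive
            factor = 2.0 if y else 0.5
            w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes

# Illustrative stream: target concept is the single relevant variable x0.
examples = [([1, 0, 0, 0], True), ([0, 1, 1, 1], False)] * 3
w, mistakes = winnow(examples, n=4)
```

Only weights of attributes active in the misclassified example are touched, which is what keeps the mistake count logarithmic in the total number of attributes.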
unknown title
Abstract
This study presents an exemplar-based nested hyperrectangle learning model (NHLM), which is an efficient and accurate supervised classification model. The proposed model is based on the concept of seeding training data in the Euclidean m-space (where m denotes the number of features) as hyperrectangles. To express exceptions, these hyperrectangles may be nested inside one another to an arbitrary depth. The fast, one-shot learning procedures can adjust weights dynamically when new examples are added. Furthermore, the "second chance" heuristic is introduced in NHLM to avoid creating more memory objects than necessary. NHLM is applied to solving the land cover classification problem in Taiwan using remotely sensed imagery. The study investigated five land cover classes and clouds; these six classes were chosen from field investigation of the study area according to a previous study. Therefore, this paper aims to produce a land cover classification based on SPOT HRV spectral data. Compared with a standard backpropagation neural network (BPN), the experimental results indicate that NHLM provides a powerful tool for categorizing remote sensing data.