Results 1–10 of 34
On the Complexity of Teaching
 Journal of Computer and System Sciences
, 1992
Abstract

Cited by 103 (2 self)
While most theoretical work in machine learning has focused on the complexity of learning, recently there has been increasing interest in formally studying the complexity of teaching. In this paper we study the complexity of teaching by considering a variant of the online learning model in which a helpful teacher selects the instances. We measure the complexity of teaching a concept from a given concept class by a combinatorial measure we call the teaching dimension. Informally, the teaching dimension of a concept class is the minimum number of instances a teacher must reveal to uniquely identify any target concept chosen from the class. A preliminary version of this paper appeared in the Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 303–314, August 1991. Most of this research was carried out while both authors were at the MIT Laboratory for Computer Science with support provided by ARO Grant DAAL03-86-K-0171, DARPA Contract N00014-89-J-1988, NSF Gr...
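The teaching-dimension definition in this abstract can be checked by brute force on small finite classes. The sketch below (the function names and set encoding are my own illustration, not from the paper) finds, for each concept, the smallest set of labeled instances consistent with that concept and no other concept in the class, and takes the maximum over all concepts:

```python
from itertools import combinations

def teaching_dimension(concepts, instances):
    """Brute-force teaching dimension of a finite concept class.

    `concepts` is a list of frozensets, each holding the positive
    instances of one concept; `instances` is the full instance space.
    """
    def smallest_teaching_set(target):
        others = [c for c in concepts if c != target]
        for k in range(len(instances) + 1):
            for subset in combinations(instances, k):
                labels = {x: (x in target) for x in subset}
                # The subset "teaches" target if every other concept
                # disagrees with at least one of its labels.
                if all(any((x in c) != lab for x, lab in labels.items())
                       for c in others):
                    return k
        return len(instances)

    return max(smallest_teaching_set(c) for c in concepts)
```

For example, the class of singletons over three instances has teaching dimension 1 (one positive example pins down the target), while adding the empty concept raises it to 3, since teaching the empty concept requires showing every instance as a negative example.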
Structure Identification in Relational Data
, 1997
Abstract

Cited by 78 (2 self)
This paper presents several investigations into the prospects for identifying meaningful structures in empirical data, namely, structures permitting effective organization of the data to meet requirements of future queries. We propose a general framework whereby the notion of identifiability is given a precise formal definition similar to that of learnability. Using this framework, we then explore if a tractable procedure exists for deciding whether a given relation is decomposable into a constraint network or a CNF theory with desirable topology and, if the answer is positive, identifying the desired decomposition. Finally, we
Learning Simple Concepts Under Simple Distributions
 SIAM Journal on Computing
, 1991
Abstract

Cited by 56 (3 self)
We aim at developing a learning theory where 'simple' concepts are easily learnable. In Valiant's learning model, many concepts turn out to be too hard (e.g., NP-hard) to learn. Relatively few concept classes have been shown to be learnable in polynomial time. In daily life, it seems that the things we care to learn are usually learnable. To model the intuitive notion of learning more closely, we do not require that the learning algorithm learn (polynomially) under all distributions, but only under all simple distributions. A distribution is simple if it is dominated by an enumerable distrib...
PAC Learning from Positive Statistical Queries
 Proc. 9th International Conference on Algorithmic Learning Theory (ALT ’98)
, 1998
Abstract

Cited by 43 (3 self)
Learning from positive examples occurs very frequently in natural learning. The PAC learning model of Valiant takes many features of natural learning into account, but in most cases it fails to describe this kind of learning. We show that in order to make learning from positive data possible, extra information about the underlying distribution must be provided to the learner. We define a PAC learning model from positive and unlabeled examples. We also define a PAC learning model from positive and unlabeled statistical queries. Relations with the PAC model ([Val84]), the statistical query model ([Kea93]), and the constant-partition classification noise model ([Dec97]) are studied. We show that k-DNF and k-decision lists are learnable in both models, i.e., with far less information than is assumed in previously used algorithms.

1 Introduction

The PAC learning model of Valiant ([Val84]) has become the reference model in computational learning theory. However, in spite of the importance of lea...
Teaching a Smarter Learner
 Journal of Computer and System Sciences
, 1994
Abstract

Cited by 43 (1 self)
We introduce a formal model of teaching in which the teacher is tailored to a particular learner, yet the teaching protocol is designed so that no collusion is possible. Not surprisingly, such a model remedies the non-intuitive aspects of other models in which the teacher must successfully teach any consistent learner. We prove that any class that can be exactly identified by a deterministic polynomial-time algorithm with access to a very rich set of example-based queries is teachable by a computationally unbounded teacher and a polynomial-time learner. In addition, we present other general results relating this model of teaching to various previous results. We also consider the problem of designing teacher/learner pairs in which both the teacher and learner are polynomial-time algorithms, and describe teacher/learner pairs for the classes of 1-decision lists and Horn sentences.

1 Introduction

Recently, there has been interest in developing formal models of teaching [4, 10, ...
The Learnability of Description Logics with Equality Constraints
 Machine Learning
, 1994
Abstract

Cited by 37 (3 self)
Although there is an increasing amount of experimental research on learning concepts expressed in first-order logic, there are still relatively few formal results on the polynomial learnability of first-order representations from examples. Most previous analyses in the PAC model have focused on subsets of Prolog, and only a few highly restricted subsets have been shown to be learnable. In this paper, we will instead study the learnability of the restricted first-order logics known as "description logics", also sometimes called "terminological logics" or "KL-ONE-type languages". Description logics are also subsets of the predicate calculus, but are expressed using a different syntax, allowing a different set of syntactic restrictions to be explored. We first define a simple description logic, summarize some results on its expressive power, and then analyze its learnability. It is shown that the full logic cannot be tractably learned. However, syntactic restrictions exist that enable tractable learning from positive examples alone, independent of the size of the vocabulary used to describe examples. The learnable sublanguage appears to be incomparable in expressive power to any subset of first-order logic previously known to be learnable.
A Computational Model of Teaching
 In Proceedings of the Fifth Annual Workshop on Computational Learning Theory
, 1992
Abstract

Cited by 26 (0 self)
Goldman and Kearns [GK91] recently introduced a notion of the teaching dimension of a concept class. The teaching dimension is intended to capture the combinatorial difficulty of teaching a concept class. We present a computational analog which allows us to make statements about bounded-complexity teachers and learners, and we extend the model by incorporating trusted information. Under this extended model, we modify algorithms for learning several expressive classes in the exact identification model of Angluin [Ang88]. We study the relationships between variants of these models, and also touch on a relationship with distribution-free learning.

1 INTRODUCTION

In the eight years since Valiant's seminal paper on learnability was published [Val84], computational learning theory has been an active and productive field. Several different learning models have been proposed, each attempting to model a different aspect of learning. Many of these models envision a teacher who interacts in some w...
A Formal Framework for Speedup Learning from Problems and Solutions
 Journal of Artificial Intelligence Research
, 1996
Abstract

Cited by 20 (0 self)
Speedup learning seeks to improve the computational efficiency of problem solving with experience. In this paper, we develop a formal framework for learning efficient problem solving from random problems and their solutions. We apply this framework to two different representations of learned knowledge, namely control rules and macro-operators, and prove theorems that identify sufficient conditions for learning in each representation. Our proofs are constructive in that they are accompanied by learning algorithms. Our framework captures both empirical and explanation-based speedup learning in a unified fashion. We illustrate our framework with implementations in two domains: symbolic integration and the Eight Puzzle. This work integrates many strands of experimental and theoretical work in machine learning, including empirical learning of control rules, macro-operator learning, Explanation-Based Learning (EBL), and Probably Approximately Correct (PAC) Learning.
On learning from exercises
 In Annual Workshop on Computational Learning Theory
, 1989
Abstract

Cited by 17 (1 self)
This paper explores a new direction in the formal theory of learning: learning in the sense of improving computational efficiency, as opposed to concept learning in the sense of Valiant. Specifically, the paper concerns algorithms that learn to solve problems from sample instances of the problems. We develop a general framework for such learning and study the framework over two distinct random sources of sample instances. The first source provides sample instances together with their solutions, while the second source provides unsolved instances or "exercises". We prove two theorems identifying conditions sufficient for learning over the two sources, our proofs being constructive in that they exhibit learning algorithms. To illustrate the scope of our results, we discuss their application to a program that learns to solve restricted classes of symbolic integrals.
Exact Learning of Tree Patterns from Queries and Counterexamples
 In Proc. COLT'98, ACM
, 1998
Abstract

Cited by 17 (3 self)
We consider learning tree patterns from queries. The instances are ordered and unordered trees with nodes labeled by constant identifiers. The concepts are tree patterns and unions of tree patterns (forests) where all the internal nodes are labeled with constants and the leaves are labeled with constants or variables. A tree pattern matches any tree obtained by replacing its variables with constant subtrees. We show that ordered trees, in which the children are matched in a strict left-to-right order, are exactly learnable from equivalence queries, while ordered forests are learnable from equivalence and membership queries. Unordered trees are exactly learnable from superset queries, and unordered forests are learnable from superset and equivalence queries. Negatively, we also show that each of the query types used is necessary for learning each concept class.

1 INTRODUCTION

A large part of computational learning theory is devoted to learning concepts over instances represented as attribute v...
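The ordered matching relation described in this abstract (variables at leaves standing for arbitrary constant subtrees, children matched one-to-one in strict left-to-right order) can be sketched as follows. The tuple encoding and the `?`-prefix convention for variables are my own illustration, and as a simplification each variable occurrence is matched independently rather than being required to bind to equal subtrees:

```python
def matches(pattern, tree):
    """Does an ordered tree pattern match a constant tree?

    Both are (label, children) pairs with `children` a list.  A pattern
    leaf whose label starts with '?' is a variable and matches any
    subtree; every other node must agree on its label and match its
    children one-to-one in left-to-right order.
    """
    plabel, pchildren = pattern
    if plabel.startswith('?') and not pchildren:
        return True  # variable leaf: matches any constant subtree
    tlabel, tchildren = tree
    if plabel != tlabel or len(pchildren) != len(tchildren):
        return False
    return all(matches(p, t) for p, t in zip(pchildren, tchildren))
```

For instance, the pattern f(?x, a) matches the tree f(g(b), a) but not f(g, b), since the second child must be labeled a.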