Results 1–10 of 81
Improving generalization with active learning
 Machine Learning, 1994
Cited by 436 (1 self)
Abstract. Active learning differs from "learning from examples" in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples alone, giving better generalization for a fixed number of training examples. In this article, we consider the problem of learning a binary concept in the absence of noise. We describe a formalism for active concept learning called selective sampling and show how it may be approximately implemented by a neural network. In selective sampling, a learner receives distribution information from the environment and queries an oracle on parts of the domain it considers "useful." We test our implementation, called an SG-network, on three domains and observe significant improvement in generalization.
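As an aside, the selective-sampling idea described above can be sketched in a minimal form. This is an illustrative reconstruction, not the SG-network itself: a committee of one-dimensional threshold classifiers is sampled from the region still consistent with the labeled data, and the learner queries the unlabeled point on which the committee disagrees most. All function names and the threshold-concept setting are assumptions for illustration.

```python
import random

def train_committee(labeled, n_members=5, seed=0):
    """Sample a committee of 1-D threshold classifiers from the
    version space: thresholds between the largest negative example
    and the smallest positive example."""
    rng = random.Random(seed)
    lo = max((x for x, y in labeled if y == 0), default=0.0)
    hi = min((x for x, y in labeled if y == 1), default=1.0)
    return [rng.uniform(lo, hi) for _ in range(n_members)]

def select_query(pool, committee):
    """Selective sampling: pick the unlabeled point the committee
    disagrees on most (minority vote count as the disagreement score)."""
    def disagreement(x):
        votes = sum(x >= t for t in committee)
        return min(votes, len(committee) - votes)
    return max(pool, key=disagreement)
```

With one negative example at 0.1 and one positive at 0.9, the committee straddles the unknown boundary, so a mid-range point like 0.5 is preferred over points the committee already agrees on.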
Solving the multiple-instance problem with axis-parallel rectangles
 Artificial Intelligence, 1997
Active learning literature survey
2010
Cited by 152 (1 self)
The key idea behind active learning is that a machine learning algorithm can achieve greater accuracy with fewer labeled training instances if it is allowed to choose the data from which it learns. An active learner may ask queries in the form of unlabeled instances to be labeled by an oracle (e.g., a human annotator). Active learning is well-motivated in many modern machine learning problems, where unlabeled data may be abundant but labels are difficult, time-consuming, or expensive to obtain. This report provides a general introduction to active learning and a survey of the literature. This includes a discussion of the scenarios in which queries can be formulated, and an overview of the query strategy frameworks proposed in the literature to date. An analysis of the empirical and theoretical evidence for active learning, a summary of several problem setting variants, and a discussion ...
Learning conjunctions of Horn clauses
 In Proceedings of the 31st Annual Symposium on Foundations of Computer Science, 1990
Cited by 111 (13 self)
Abstract. An algorithm is presented for learning the class of Boolean formulas that are expressible as conjunctions of Horn clauses. (A Horn clause is a disjunction of literals, all but at most one of which is a negated variable.) The algorithm uses equivalence queries and membership queries to produce a formula that is logically equivalent to the unknown formula to be learned. The amount of time used by the algorithm is polynomial in the number of variables and the number of clauses in the unknown formula.
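The membership queries in this abstract amount to evaluating candidate truth assignments against the unknown Horn formula. The sketch below shows only that representation, not the learning algorithm itself; the encoding (a clause as a set of negated variables plus an optional positive head) is a hypothetical choice for illustration.

```python
def satisfies(assignment, horn_clause):
    """A Horn clause (negated_vars, head) encodes
    (not v1 or ... or not vk or head), i.e. (v1 and ... and vk) -> head.
    head is None for a purely negative clause."""
    negated, head = horn_clause
    if any(not assignment[v] for v in negated):
        return True  # some negated literal is satisfied
    return head is not None and assignment[head]

def satisfies_all(assignment, clauses):
    """Membership-query semantics: does the assignment satisfy the
    conjunction of Horn clauses?"""
    return all(satisfies(assignment, c) for c in clauses)
```

For example, the formula (a and b -> c) and (not c) is encoded as `[({"a", "b"}, "c"), ({"c"}, None)]`; an assignment making c true falsifies the second clause.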
Learning the CLASSIC Description Logic: Theoretical and Experimental Results
 In Principles of Knowledge Representation and Reasoning: Proceedings of the Fourth International Conference (KR-94), 1994
Cited by 95 (7 self)
We present a series of theoretical and experimental results on the learnability of description logics. We first extend previous formal learnability results on simple description logics to C-Classic, a description logic expressive enough to be practically useful. We then experimentally evaluate two extensions of a learning algorithm suggested by the formal analysis. The first extension learns C-Classic descriptions from individuals. (The formal results assume that examples are themselves descriptions.) The second extension learns disjunctions of C-Classic descriptions from individuals. The experiments, which were conducted using several hundred target concepts from a number of domains, indicate that both extensions reliably learn complex natural concepts.

1 INTRODUCTION
One well-known family of formalisms for representing knowledge is description logics, sometimes also called terminological logics or KL-ONE-type languages. Description logics have been applied in a number of contexts ...
Inductive Inference, DFAs and Computational Complexity
 2nd Int. Workshop on Analogical and Inductive Inference (AII), 1989
Cited by 83 (1 self)
This paper surveys recent results concerning the inference of deterministic finite automata (DFAs). The results discussed determine the extent to which DFAs can be feasibly inferred, and highlight a number of interesting approaches in computational learning theory.
Learning to Take Actions
1998
Cited by 53 (8 self)
We formalize a model for supervised learning of action strategies in dynamic stochastic domains and show that PAC-learning results on Occam algorithms hold in this model as well. We then identify a class of rule-based action strategies for which polynomial time learning is possible. The representation of strategies is a generalization of decision lists; strategies include rules with existentially quantified conditions, simple recursive predicates, and small internal state, but are syntactically restricted. We also study the learnability of hierarchically composed strategies where a subroutine already acquired can be used as a basic action in a higher level strategy. We prove some positive results in this setting, but also show that in some cases the hierarchical learning problem is computationally hard.

1 Introduction
We formalize a model for supervised learning of action strategies in dynamic stochastic domains, and study the learnability of strategies represented by rule-based systems ...
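The plain decision list that this paper's strategy class generalizes is simple to sketch: an ordered sequence of (condition, action) rules where the first matching rule fires. The gridworld policy below is a hypothetical example; the paper's existentially quantified conditions, recursive predicates, and internal state are omitted.

```python
def make_decision_list(rules, default):
    """Build a strategy from an ordered list of (condition, action)
    pairs; the first rule whose condition holds on the state
    determines the action, else the default action is taken."""
    def strategy(state):
        for condition, action in rules:
            if condition(state):
                return action
        return default
    return strategy

# Hypothetical gridworld policy: stop at the goal, otherwise avoid
# obstacles, otherwise move forward.
policy = make_decision_list(
    [(lambda s: s["at_goal"], "stop"),
     (lambda s: s["obstacle_ahead"], "turn")],
    default="forward")
```

Rule order matters: because "stop" precedes "turn", a state that is both at the goal and facing an obstacle still yields "stop".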
Probably Approximately Correct Learning
 Proceedings of the Eighth National Conference on Artificial Intelligence, 1990
Cited by 40 (1 self)
This paper surveys some recent theoretical results on the efficiency of machine learning algorithms. The main tool described is the notion of Probably Approximately Correct (PAC) learning, introduced by Valiant. We define this learning model and then look at some of the results obtained in it. We then consider some criticisms of the PAC model and the extensions proposed to address these criticisms. Finally, we look briefly at other models recently proposed in computational learning theory.

Introduction
It's a dangerous thing to try to formalize an enterprise as complex and varied as machine learning so that it can be subjected to rigorous mathematical analysis. To be tractable, a formal model must be simple. Thus, inevitably, most people will feel that important aspects of the activity have been left out of the theory. Of course, they will be right. Therefore, it is not advisable to present a theory of machine learning as having reduced the entire field to its bare essentials. All ...
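For reference, the PAC criterion the survey is built around can be stated compactly: a hypothesis h_S learned from m i.i.d. examples drawn from distribution D should, with high confidence, have small error with respect to the target concept c, using a sample size polynomial in the accuracy and confidence parameters (this is the standard textbook formulation, not a quote from the paper):

```latex
\Pr_{S \sim D^m}\bigl[\operatorname{err}_D(h_S) \le \varepsilon\bigr] \ge 1 - \delta,
\qquad
\operatorname{err}_D(h) = \Pr_{x \sim D}\bigl[h(x) \ne c(x)\bigr],
```

with m = poly(1/ε, 1/δ, and the relevant size parameters of the concept class), and the learner running in time polynomial in the same quantities.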
The Learnability of Description Logics with Equality Constraints
 Machine Learning, 1994
Cited by 37 (3 self)
Although there is an increasing amount of experimental research on learning concepts expressed in first-order logic, there are still relatively few formal results on the polynomial learnability of first-order representations from examples. Most previous analyses in the PAC model have focused on subsets of Prolog, and only a few highly restricted subsets have been shown to be learnable. In this paper, we will study instead the learnability of the restricted first-order logics known as "description logics", also sometimes called "terminological logics" or "KL-ONE-type languages". Description logics are also subsets of predicate calculus, but are expressed using a different syntax, allowing a different set of syntactic restrictions to be explored. We first define a simple description logic, summarize some results on its expressive power, and then analyze its learnability. It is shown that the full logic cannot be tractably learned. However, syntactic restrictions exist that enable tractable learning from positive examples alone, independent of the size of the vocabulary used to describe examples. The learnable sublanguage appears to be incomparable in expressive power to any subset of first-order logic previously known to be learnable.
A neuroidal architecture for cognitive computation
 Journal of the ACM, 2000
Cited by 35 (4 self)
Abstract. An architecture is described for designing systems that acquire and manipulate large amounts of unsystematized, or so-called commonsense, knowledge. Its aim is to exploit to the full those aspects of computational learning that are known to offer powerful solutions in the acquisition and maintenance of robust knowledge bases. The architecture makes explicit the requirements on the basic computational tasks that are to be performed and is designed to make these computationally tractable even for very large databases. The main claims are that (i) the basic learning and deduction tasks are provably tractable and (ii) tractable learning offers viable approaches to a range of issues that have been previously identified as problematic for artificial intelligence systems that are programmed. Among the issues that learning offers to resolve are robustness to inconsistencies, robustness to incomplete information and resolving among alternatives. Attribute-efficient learning algorithms, which allow learning from few examples in large dimensional systems, are fundamental to the approach. Underpinning the overall architecture is a new principled approach to manipulating relations in learning systems. This approach, of independently quantified arguments, allows propositional learning algorithms to be applied systematically to learning relational concepts in polynomial time and in a modular fashion.
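The attribute-efficient learning this abstract relies on is exemplified by multiplicative-update algorithms such as Littlestone's Winnow, whose mistake bound for a k-literal disjunction over n attributes grows only logarithmically in n. The sketch below is a batch wrapper around the online update rule; the parameter names and the default threshold n/2 are illustrative choices, not taken from the paper.

```python
def winnow(examples, n, threshold=None, lr=2.0, passes=20):
    """Winnow for monotone disjunctions: predict 1 when the weighted
    sum of active attributes reaches the threshold; on a false
    positive, demote active weights; on a false negative, promote
    them. Updates are multiplicative, hence attribute efficiency."""
    theta = threshold if threshold is not None else n / 2
    w = [1.0] * n
    for _ in range(passes):
        for x, y in examples:            # x: 0/1 vector, y: 0/1 label
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
            if pred == 1 and y == 0:     # false positive: demote
                w = [wi / lr if xi else wi for wi, xi in zip(w, x)]
            elif pred == 0 and y == 1:   # false negative: promote
                w = [wi * lr if xi else wi for wi, xi in zip(w, x)]
    return w, theta

def predict(w, theta, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
```

On a target disjunction such as x0 OR x2 over four attributes, the relevant weights are promoted and the irrelevant ones demoted after only a handful of mistakes.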