Results 1-10 of 79
Combining labeled and unlabeled data with co-training
, 1998
Abstract

Cited by 1239 (28 self)
We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a setting in which the description of each example can be partitioned into two distinct views, motivated by the task of learning to classify web pages. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks that point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm's predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. As part of our analysis, we provide new re...
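The two-view strategy this abstract describes can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's construction: the toy lookup classifier stands in for any per-view learner, and "confident when the feature value has been seen before" stands in for a real confidence measure.

```python
from collections import Counter, defaultdict

class LookupClassifier:
    """Toy per-view learner: memorizes the majority label for each feature value.
    A stand-in for any real classifier trained on one view of the data."""
    def fit(self, xs, ys):
        votes = defaultdict(Counter)
        for x, y in zip(xs, ys):
            votes[x][y] += 1
        self.table = {x: c.most_common(1)[0][0] for x, c in votes.items()}
        return self

    def predict(self, x):
        return self.table.get(x)  # None when this view has no opinion

def co_train(labeled, unlabeled, rounds=3):
    """labeled: list of ((view1, view2), label); unlabeled: list of (view1, view2).
    Each round, a learner is trained on each view; examples that either learner
    can label are pseudo-labeled and moved into the training pool, enlarging
    the other learner's training set on the next round."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        c1 = LookupClassifier().fit([v1 for (v1, _v2), _y in labeled],
                                    [y for _x, y in labeled])
        c2 = LookupClassifier().fit([v2 for (_v1, v2), _y in labeled],
                                    [y for _x, y in labeled])
        still = []
        for v1, v2 in unlabeled:
            guess = c1.predict(v1)
            if guess is None:
                guess = c2.predict(v2)
            if guess is None:
                still.append((v1, v2))
            else:
                labeled.append(((v1, v2), guess))  # pseudo-label the example
        unlabeled = still
    return labeled
```

With pages described by (page word, hyperlink word), an example whose page word is new but whose link word is known gets labeled by the second learner, and that pseudo-label then teaches the first learner the new page word on the following round.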
The Extraction of Refined Rules from Knowledge-Based Neural Networks
 Machine Learning
, 1993
Abstract

Cited by 196 (4 self)
Neural networks, despite their empirically proven abilities, have been little used for the refinement of existing knowledge because this task requires a three-step process. First, knowledge in some form must be inserted into a neural network. Second, the network must be refined. Third, knowledge must be extracted from the network. We have previously described a method for the first step of this process. Standard neural learning techniques can accomplish the second step. In this paper, we propose and empirically evaluate a method for the final, and possibly most difficult, step. This method efficiently extracts symbolic rules from trained neural networks. The four major results of empirical tests of this method are that the extracted rules: (1) closely reproduce (and can even exceed) the accuracy of the network from which they are extracted; (2) are superior to the rules produced by methods that directly refine symbolic rules; (3) are superior to those produced by previous techniques fo...
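The extraction step the abstract summarizes can be illustrated on a single trained threshold unit. This is only a sketch in the spirit of M-of-N extraction: the real method clusters weights into groups and handles negative weights, whereas here we assume all incoming weights already sit near one common positive value.

```python
import math

def extract_m_of_n(weights, bias):
    """Sketch: turn one threshold unit (fires when sum(w_i * x_i) + bias > 0,
    x_i in {0, 1}) into an M-of-N rule, assuming the weights cluster around a
    single positive value. Returns (m, n): 'fire if at least m of the n
    antecedents are true'."""
    n = len(weights)
    w = sum(weights) / n               # collapse the weight cluster to its mean
    # the unit fires when k * w + bias > 0, i.e. k > -bias / w
    m = math.floor(-bias / w) + 1
    return m, n
```

For example, a unit with weights [2.1, 1.9, 2.0] and bias -3.0 yields the rule (2, 3): fire if at least 2 of the 3 antecedents hold.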
New Methods for Competitive Coevolution
 Evolutionary Computation
, 1996
Abstract

Cited by 121 (3 self)
We consider "competitive coevolution," in which fitness is based on direct competition among individuals selected from two independently evolving populations of "hosts" and "parasites." Competitive coevolution can lead to an "arms race," in which the two populations reciprocally drive one another to increasing levels of performance and complexity. We use the games of Nim and 3-D Tic-Tac-Toe as test problems to explore three new techniques in competitive coevolution. "Competitive fitness sharing" changes the way fitness is measured, "shared sampling" provides a method for selecting a strong, diverse set of parasites, and the "hall of fame" encourages arms races by saving good individuals from prior generations. We provide several different motivations for these methods, and mathematical insights into their use. Experimental comparisons are done, and a detailed analysis of these experiments is presented in terms of testing issues, diversity, extinction, arms race progress measurements, a...
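Of the three techniques, competitive fitness sharing is the easiest to make concrete. A common formulation divides the credit for each defeated parasite among all hosts that defeat it, so beating a rarely-beaten opponent is worth more; the dict-of-sets encoding below is an illustrative assumption, not the paper's representation.

```python
from collections import Counter

def shared_fitness(defeats):
    """defeats: dict mapping each host to the set of parasites it beats.
    Each parasite contributes 1/N to every host that beats it, where N is
    the number of hosts beating that parasite, so rare wins count more."""
    beaten_by = Counter()
    for wins in defeats.values():
        for p in wins:
            beaten_by[p] += 1
    return {host: sum(1.0 / beaten_by[p] for p in wins)
            for host, wins in defeats.items()}
```

With defeats = {"A": {"p1", "p2"}, "B": {"p1"}}, plain win-counting scores A at 2 and B at 1, while sharing scores A at 1.5 and B at 0.5: A's unique win over p2 keeps full credit, and the shared win over p1 is split, which is what preserves diverse specialists in the population.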
Semi-supervised Clustering with User Feedback
, 2003
Abstract

Cited by 99 (2 self)
We present a new approach to clustering based on the observation that "it is easier to criticize than to construct." Our approach of semi-supervised clustering allows a user to iteratively provide feedback to a clustering algorithm. The feedback is incorporated in the form of constraints which the clustering algorithm attempts to satisfy on future iterations. These constraints allow the user to guide the clusterer towards clusterings of the data that the user finds more useful. We demonstrate semi-supervised clustering with a system that learns to cluster news stories from a Reuters data set.

Introduction
Consider the following problem: you are given 100,000 text documents (e.g., papers, newsgroup articles, or web pages) and asked to group them into classes or into a hierarchy such that related documents are grouped together. You are not told what classes or hierarchy to use or what documents are related; you have some criteria in mind, but may not be able to say exactly w...
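The feedback loop the abstract describes can be made concrete with the two pairwise constraint types common in later semi-supervised clustering work, must-link and cannot-link; the paper's actual feedback form may be richer, so treat this encoding as an assumption.

```python
def violations(assignment, must_link, cannot_link):
    """assignment: dict mapping each document id to a cluster id.
    Returns the user-feedback constraints the current clustering violates:
    must-link pairs split across clusters, cannot-link pairs lumped together."""
    bad = []
    for a, b in must_link:
        if assignment[a] != assignment[b]:
            bad.append(("must-link", a, b))
    for a, b in cannot_link:
        if assignment[a] == assignment[b]:
            bad.append(("cannot-link", a, b))
    return bad
```

On each iteration the clusterer would re-cluster while penalizing (or forbidding) the violations returned here, so the user only has to criticize the current grouping rather than construct one.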
Coarse sample complexity bounds for active learning
 In Neural Information Processing Systems
, 2005
How Many Queries are Needed to Learn?
, 1996
Abstract

Cited by 63 (8 self)
We investigate the query complexity of exact learning in the membership and (proper) equivalence query model. We give a complete characterization of concept classes that are learnable with a polynomial number of polynomial-sized queries in this model. We give applications of this characterization, including results on learning a natural subclass of DNF formulas, and on learning with membership queries alone. Query complexity has previously been used to prove lower bounds on the time complexity of exact learning. We show a new relationship between query complexity and time complexity in exact learning: if any "honest" class is exactly and properly learnable with polynomial query complexity, but not learnable in polynomial time, then P ≠ NP. In particular, we show that an honest class is exactly polynomial-query learnable if and only if it is learnable using an oracle for Σ^p_4.

1 Introduction
Today concept learning is studied under two rigorous frameworks which model t...
Finiteness Results for Sigmoidal "Neural" Networks
 In Proceedings of 25th Annual ACM Symposium on the Theory of Computing
, 1993
Abstract

Cited by 44 (12 self)
Angus Macintyre (Mathematical Institute, University of Oxford, Oxford OX1 3LB, England, UK; ajm@maths.ox.ac.uk) and Eduardo D. Sontag (Dept. of Mathematics, Rutgers University, New Brunswick, NJ 08903; sontag@hilbert.rutgers.edu). This paper deals with analog circuits. It establishes the finiteness of VC dimension, teaching dimension, and several other measures of sample complexity which arise in learning theory. It also shows that the equivalence of behaviors, and the loading problem, are effectively decidable, modulo a widely believed conjecture in number theory. The results, the first ones that are independent of weight size, apply when the gate function is the "standard sigmoid" commonly used in neural networks research. The proofs rely on very recent developments in the elementary theory of real numbers with exponentiation. (Some weaker conclusions are also given for more general analytic gate functions...
Teaching a Smarter Learner
 Journal of Computer and System Sciences
, 1994
Abstract

Cited by 39 (1 self)
We introduce a formal model of teaching in which the teacher is tailored to a particular learner, yet the teaching protocol is designed so that no collusion is possible. Not surprisingly, such a model remedies the non-intuitive aspects of other models in which the teacher must successfully teach any consistent learner. We prove that any class that can be exactly identified by a deterministic polynomial-time algorithm with access to a very rich set of example-based queries is teachable by a computationally unbounded teacher and a polynomial-time learner. In addition, we present other general results relating this model of teaching to various previous results. We also consider the problem of designing teacher/learner pairs in which both the teacher and learner are polynomial-time algorithms and describe teacher/learner pairs for the classes of 1-decision lists and Horn sentences.

1 Introduction
Recently, there has been interest in developing formal models of teaching [4, 10, ...