Results 1–10 of 84,692
Boosting a Weak Learning Algorithm By Majority
, 1995
"... We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas pr ..."
Abstract

Cited by 509 (15 self)
presented by Schapire in his paper "The strength of weak learnability", and represents an improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant's polynomial PAC learning framework, which are the best general
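The combination scheme this abstract describes, training many weak hypotheses on different sets of examples and combining their predictions, can be sketched as a toy majority vote (an illustration only, not Freund's actual boosting-by-majority algorithm; the one-feature threshold stump and bootstrap resampling are assumptions made for the example):

```python
import random

def train_stump(sample):
    # Weak learner (an illustrative assumption): a one-feature threshold
    # stump chosen to minimise training error on `sample`.
    best = None
    for thresh in sorted({x for x, _ in sample}):
        for sign in (+1, -1):
            err = sum(1 for x, y in sample
                      if sign * (1 if x >= thresh else -1) != y)
            if best is None or err < best[0]:
                best = (err, thresh, sign)
    _, thresh, sign = best
    return lambda x: sign * (1 if x >= thresh else -1)

def majority_boost(data, rounds=11, seed=0):
    # Train each weak hypothesis on a different (bootstrap-resampled)
    # set of examples and combine them by unweighted majority vote.
    rng = random.Random(seed)
    hyps = [train_stump([rng.choice(data) for _ in data])
            for _ in range(rounds)]
    return lambda x: 1 if sum(h(x) for h in hyps) > 0 else -1
```

Each stump sees a different resampled training set, so their errors are partly independent and the vote is more accurate than any single stump.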
Combining labeled and unlabeled data with co-training
, 1998
"... We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a setting in which the description of each example can be partitioned into two distinct views, motivated by the ta ..."
Abstract

Cited by 1602 (28 self)
algorithm's predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also
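The two-view loop this snippet describes, where each view's confident predictions on unlabeled examples enlarge the other view's training set, might look like the following toy version (the per-view threshold learner and the distance-from-threshold confidence heuristic are illustrative assumptions, not the Blum–Mitchell algorithm itself):

```python
def fit_threshold(pairs):
    # pairs: list of (feature_value, label in {-1, +1}). Hypothetical weak
    # per-view learner: threshold halfway between the two class means.
    pos = [v for v, y in pairs if y == 1]
    neg = [v for v, y in pairs if y == -1]
    t = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return (lambda v: 1 if v >= t else -1), t

def co_train(labeled, unlabeled, rounds=5):
    # labeled: [((view1, view2), y)]; unlabeled: [(view1, view2)].
    pools = [list(labeled), list(labeled)]  # training sets for views 0 and 1
    unl = list(unlabeled)
    for _ in range(rounds):
        for view in (0, 1):
            if not unl:
                break
            clf, t = fit_threshold([(x[view], y) for x, y in pools[view]])
            x = max(unl, key=lambda u: abs(u[view] - t))  # most confident
            unl.remove(x)
            pools[1 - view].append((x, clf(x[view])))  # teach the other view
    clfs = [fit_threshold([(x[v], y) for x, y in pools[v]])[0] for v in (0, 1)]
    return lambda x: 1 if clfs[0](x[0]) + clfs[1](x[1]) > 0 else -1
```

The key design point the abstract highlights survives in the sketch: an example labeled confidently by one view becomes a training example for the other, so the two redundant views bootstrap each other from a small labeled seed.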
Cryptographic Limitations on Learning Boolean Formulae and Finite Automata
 PROCEEDINGS OF THE TWENTY-FIRST ANNUAL ACM SYMPOSIUM ON THEORY OF COMPUTING
, 1989
"... In this paper we prove the intractability of learning several classes of Boolean functions in the distributionfree model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are representation independent, in that they hold regardless of the syntact ..."
Abstract

Cited by 342 (14 self)
In this paper we prove the intractability of learning several classes of Boolean functions in the distribution-free model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are representation independent, in that they hold regardless
Learning With Many Irrelevant Features
 In Proceedings of the Ninth National Conference on Artificial Intelligence
, 1991
"... In many domains, an appropriate inductive bias is the MINFEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This paper defines and studies this bias. First, it is shown that any learning algorithm implementing the MINFEATURES bias requires \Theta( 1 ff ..."
Abstract

Cited by 250 (4 self)
Θ((1/ε) ln(1/δ) + (1/ε)[2^p + p ln n]) training examples to guarantee PAC-learning a concept having p relevant features out of n available features. This bound is only logarithmic in the number of irrelevant features. The paper also presents a quasi-polynomial time algorithm, FOCUS, which
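An exhaustive search implementing the MIN-FEATURES bias, in the spirit of the FOCUS algorithm this entry describes, can be sketched as follows (a minimal illustration under assumed inputs, not the paper's implementation; examples are taken to be pairs of a boolean feature tuple and a label):

```python
from itertools import combinations

def focus(examples, n_features):
    # Try feature subsets of increasing size until one renders the sample
    # consistent, i.e. no two examples agree on the subset's features
    # while disagreeing on the label. Returns the first (smallest) subset.
    for k in range(n_features + 1):
        for subset in combinations(range(n_features), k):
            projection = {}
            consistent = True
            for x, y in examples:
                key = tuple(x[i] for i in subset)
                if projection.setdefault(key, y) != y:
                    consistent = False
                    break
            if consistent:
                return subset
    return None  # unreachable: the full feature set is always consistent
```

Enumerating subsets by increasing size is what makes the returned subset minimal, which is exactly the preference the MIN-FEATURES bias encodes.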
Query, PACS and simple-PAC Learning
, 1998
"... We study a distribution dependent form of PAC learning that uses probability distributions related to Kolmogorov complexity. We relate the PACS model, defined by Denis, D'Halluin and Gilleron in [3], with the standard simplePAC model and give a general technique that subsumes the results i ..."
Abstract
We study a distribution dependent form of PAC learning that uses probability distributions related to Kolmogorov complexity. We relate the PACS model, defined by Denis, D'Halluin and Gilleron in [3], with the standard simple-PAC model and give a general technique that subsumes the results
PAC Associative Reinforcement Learning
, 1995
"... General algorithms for the reinforcement learning problem typically learn policies in the form of a table that directly maps the states of the environment into actions. When the statespace is large these methods become impractical. One approach to increase efficiency is to restrict the class of pol ..."
Abstract

Cited by 1 (0 self)
of policies by considering only policies that can be described using some fixed representation. This paper pursues this approach and analyzes the associative reinforcement learning problem in the PAC learning framework. As a representation, we use a general form of decision lists that can describe a wide
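A decision-list policy representation of the kind this entry describes, where rules are evaluated in order and the first matching condition determines the action, can be sketched as follows (the rule predicates and action names are hypothetical examples, not from the paper):

```python
def make_decision_list(rules, default):
    # rules: ordered list of (predicate, action) pairs. Evaluate the
    # predicates in order on the current state; the first one that
    # matches fixes the action, and `default` applies if none match.
    def policy(state):
        for predicate, action in rules:
            if predicate(state):
                return action
        return default
    return policy
```

The fixed representation keeps the policy compact: its size grows with the number of rules rather than with the number of states, which is the efficiency gain the abstract points to.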
The Relationship between PAC, the Statistical Physics framework, the Bayesian framework, and the VC framework
"... This paper discusses the intimate relationships between the supervised learning frameworks mentioned in the title. In particular, it shows how all those frameworks can be viewed as particular instances of a single overarching formalism. In doing this many commonly misunderstood aspects of those fram ..."
Abstract

Cited by 45 (8 self)
This paper discusses the intimate relationships between the supervised learning frameworks mentioned in the title. In particular, it shows how all those frameworks can be viewed as particular instances of a single overarching formalism. In doing this many commonly misunderstood aspects of those
PAC learning with positive examples
, 1998
"... Learning with positive examples only occurs very frequently in natural learning. But learning theories have always encountered a lot of difficulties with these situations. While learning with positive examples has been extensively studied in Gold framework, it does not exist satisfactory generalizat ..."
Abstract
Learning with positive examples only occurs very frequently in natural learning, but learning theories have always encountered great difficulty with these situations. While learning with positive examples has been extensively studied in Gold's framework, there exists no satisfactory
Scale-sensitive Dimensions, Uniform Convergence, and Learnability
, 1997
"... Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distributionfree convergence property of means to expectations uniformly over classes of random variables. Classes of realvalued functions ..."
Abstract

Cited by 238 (2 self)
Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions
Extension of the PAC Framework to Finite and Countable Markov Chains
 In Proceedings of the 12th Annual Conference on Computational Learning Theory
, 2000
"... We consider a model of learning in which the successive observations follow a certain Markov chain. The observations are labeled according to a membership to some unknown target set. For a Markov chain with finitely many states we show that, if the target set belongs to a family of sets with a finit ..."
Abstract

Cited by 23 (1 self)
finite VC dimension, then probably approximately correct learning of this set is possible with polynomially large samples. Specifically, for observations following a random walk with a state space X and uniform stationary distribution, the sample size required is no more than O(t_0 · (1/(1 − λ_2)) log