Results 1-10 of 11
The Power of Vacillation in Language Learning
, 1992
Abstract

Cited by 44 (11 self)
Some extensions are considered of Gold's influential model of language learning by machine from positive data. Studied are criteria of successful learning featuring convergence in the limit to vacillation between several alternative correct grammars. The main theorem of this paper is that there are classes of languages that can be learned if convergence in the limit to up to (n+1) exactly correct grammars is allowed but which cannot be learned if convergence in the limit is to no more than n grammars, where the no more than n grammars can each make finitely many mistakes. This contrasts sharply with results of Barzdin and Podnieks and, later, Case and Smith, for learnability from both positive and negative data. A subset principle from a 1980 paper of Angluin is extended to the vacillatory and other criteria of this paper. This principle provides a necessary condition for circumventing overgeneralization in learning from positive data. It is applied to prove another theorem to the eff...
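The flavor of Gold-style learning from positive data and the subset principle mentioned in this abstract can be shown in a minimal sketch. Everything here is an illustrative assumption, not the paper's construction: the class of languages, the grammar names G1-G3, and the minimality heuristic are all invented for the example.

```python
# Minimal sketch, assuming a small nested class of finite languages;
# "grammars" here are just names for sets. The learner applies a
# simple subset-principle heuristic: always conjecture a minimal
# language consistent with the positive data seen so far, which
# avoids overgeneralizing from G1's data to G2 or G3.

CLASS = {
    "G1": {1},
    "G2": {1, 2},
    "G3": {1, 2, 3},
}

def learner(text):
    """Return the sequence of conjectures made on a presentation."""
    seen = set()
    conjectures = []
    for datum in text:
        seen.add(datum)
        consistent = [g for g, lang in CLASS.items() if seen <= lang]
        # Subset principle: pick a minimal consistent language.
        conjectures.append(min(consistent, key=lambda g: len(CLASS[g])))
    return conjectures

# On a text for G2, the learner converges in the limit to "G2".
print(learner([1, 1, 2, 1, 2]))  # ['G1', 'G1', 'G2', 'G2', 'G2']
```

The vacillatory criteria studied in the paper relax the convergence requirement: rather than settling on one conjecture as this sketch does, the learner may oscillate forever among up to n correct grammars.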
The synthesis of language learners
 Information and Computation
, 1999
Abstract

Cited by 12 (0 self)
An index for an r.e. class of languages (by definition) is a procedure which generates a sequence of grammars defining the class. An index for an indexed family of languages (by definition) is a procedure which generates a sequence of decision procedures defining the family. Studied is the metaproblem of synthesizing from indices for r.e. classes and for indexed families of languages various kinds of language learners for the corresponding classes or families indexed. Many positive results, as well as some negative results, are presented regarding the existence of such synthesizers. The negative results essentially provide lower bounds for the positive results. The proofs of some of the positive results yield, as pleasant corollaries, subset-principle or telltale style characterizations for the learnability of the corresponding classes or families indexed. For example, the indexed families of recursive languages that can be behaviorally correctly identified from positive data are surprisingly characterized by Angluin's (1980b) Condition 2 (the subset principle for circumventing overgeneralization).
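A drastically simplified sketch of the synthesis idea described above, under loud assumptions: here the "index" is just a finite list of finite languages, and the synthesized learner is a plain enumeration learner, not the paper's procedure.

```python
# Drastically simplified sketch (assumptions: the "index" is a finite
# list of finite languages; the synthesized learner is an enumeration
# learner that conjectures the first indexed language consistent with
# the positive data seen so far).

def synthesize_learner(index):
    """index: list of (name, finite language) pairs, in index order."""
    def learner(text):
        seen, conjectures = set(), []
        for datum in text:
            seen.add(datum)
            for name, lang in index:
                if seen <= lang:
                    conjectures.append(name)
                    break
        return conjectures
    return learner

# The synthesizer turns the index itself into a working learner.
learner = synthesize_learner([("L1", {0}), ("L2", {0, 1})])
print(learner([0, 1, 0]))  # ['L1', 'L2', 'L2']
```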
Vacillatory and BC Learning on Noisy Data
 Theoretical Computer Science A
, 1996
Abstract

Cited by 7 (5 self)
this paper considers r.e. subsets L of N. We write ...
Trees and Learning
 Proceedings of the Ninth Conference on Computational Learning Theory (COLT), ACM Press
, 1996
Abstract

Cited by 5 (5 self)
We characterize FIN-, EX-, and BC-learning, as well as the corresponding notions of team learning, in terms of isolated branches on uniformly strongly recursive sequences of trees. Further, the more restrictive models of FIN-learning and strong-monotonic BC-learning can be characterized in terms of isolated branches on a single tree. We discuss learning with additional information where the learner receives an index for a strongly recursive tree such that the function to be learned is isolated on this tree. We show that EX-learning with this type of additional information is strictly more powerful than EX-learning.

1 Introduction. Inductive inference [1, 2, 4, 6, 10] deals with learning classes of recursive functions in the limit under certain convergence constraints. The most general setting is that of behaviorally correct learning (BC): for each prefix f(0)f(1)...f(n) of the recursive function f, the learner guesses a program for f; the learner succeeds if ...
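The notion of an isolated branch can be made concrete in a toy finite sketch. This is purely illustrative: the particular tree, the depth bound, and the helper names are assumptions, and true isolation concerns infinite branches on recursive trees, not the finite check below.

```python
from itertools import product

# Toy sketch (illustrative assumptions throughout): the tree consists
# of all 0/1-strings with at most one '1', i.e. an all-zeros "spine"
# plus one branch peeling off at each level. A branch counts as
# isolated (up to a depth bound) if some proper prefix of it has
# exactly one extension in the tree at that depth.

def in_tree(s):
    """Membership in the tree: strings of the form 0^a or 0^a 1 0^b."""
    return s.count("1") <= 1

def n_extensions(prefix, depth):
    """Number of tree nodes at `depth` extending `prefix`."""
    return sum(in_tree(prefix + "".join(bits))
               for bits in product("01", repeat=depth - len(prefix)))

def isolated_up_to(branch, depth):
    """True if some proper prefix of `branch` is only extended one way."""
    return any(n_extensions(branch[:k], depth) == 1 for k in range(depth))

print(isolated_up_to("0100", 4))  # True: after "01" the branch is forced
print(isolated_up_to("0000", 4))  # False: branches peel off at every prefix of the spine
```

The all-zeros spine is a limit of the other branches and so is never isolated, while each branch that turns off the spine is pinned down by its first '1'.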
On Theory Revision with Queries
 In Proc. 12th Annu. Conf. on Comput. Learning Theory
, 1999
Abstract

Cited by 4 (3 self)
The theory revision, or concept revision, problem is to correct a given, roughly correct concept. Given the representation of an initial concept, one would like to obtain a representation of the target concept by applying revisions, that is, syntactic modifications such as the deletion of a variable or a term. We give efficient revision algorithms using membership and equivalence queries for 2-term monotone DNF, monotone k-DNF, and read-once formulas. An example is given showing that some monotone DNF formulas cannot be revised efficiently. These results all assume that the revisions allowed are the replacements of a variable occurrence with a constant, which, for DNFs, corresponds to deletions of variables and terms. We also discuss a more general error model where, besides deletions, additions are also allowed.

1 Introduction. What the computational learning theory community calls a concept is often referred to as a theory elsewhere in artificial intelligence and logic....
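The deletion-only revision model can be sketched concretely. The representation and helper names below are assumptions for illustration; the paper's algorithms additionally use membership and equivalence queries, which this fragment does not model.

```python
# Hypothetical sketch of the deletion-only revision model for monotone
# DNF: replacing a variable occurrence by the constant 1 deletes that
# variable from its term, while replacing it by 0 deletes the whole
# term. A DNF is a list of frozensets of variable names.

def evaluate(dnf, assignment):
    """Evaluate a monotone DNF under a {variable: bool} assignment."""
    return any(all(assignment[v] for v in term) for term in dnf)

def delete_variable(dnf, term_index, var):
    """Fix `var` to 1 inside one term (deletes that occurrence)."""
    revised = [set(t) for t in dnf]
    revised[term_index].discard(var)
    return [frozenset(t) for t in revised]

def delete_term(dnf, term_index):
    """Fix a variable of the term to 0 (deletes the term)."""
    return [t for i, t in enumerate(dnf) if i != term_index]

# Initial concept x1x2 v x3; one deletion revises it to x1 v x3.
initial = [frozenset({"x1", "x2"}), frozenset({"x3"})]
target = delete_variable(initial, 0, "x2")
print(evaluate(target, {"x1": True, "x2": False, "x3": False}))  # True
```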
The Power of Frequency Computation (Extended Abstract)
 In: Proceedings FCT'95, Lecture Notes in Computer Science
, 1995
Abstract

Cited by 1 (1 self)
Martin Kummer and Frank Stephan, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, D-76128 Karlsruhe, Germany. {kummer, fstephan}@ira.uka.de

Abstract. The notion of frequency computation concerns approximative computations of n distinct parallel queries to a set A. A is called (m, n)-recursive if there is an algorithm which answers any n distinct parallel queries to A such that at least m answers are correct. This paper gives natural combinatorial characterizations of the fundamental inclusion problem, namely the question for which choices of the parameters m, n, m', n' every (m, n)-recursive set is (m', n')-recursive. We also characterize the inclusion problem restricted to recursively enumerable sets and the inclusion problem for the polynomial-time bounded version of frequency computation. Furthermore, using these characterizations we obtain many explicit inclusions and non-inclusions.

1 Introduction. Frequency computation is a classic...
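The (m, n) requirement can be illustrated with a toy checker. The set, queries, and function names are assumptions for concreteness, not the paper's formalism.

```python
# Toy illustration (assumption, not the paper's formalism): an
# (m, n)-answerer must reply to n distinct parallel membership
# queries about A so that at least m of the n answers are correct.

def correct_count(A, queries, answers):
    """How many of the parallel answers are correct for A."""
    return sum((q in A) == a for q, a in zip(queries, answers))

def meets_frequency(A, queries, answers, m):
    """Check the (m, n) requirement for one round of n queries."""
    return correct_count(A, queries, answers) >= m

A = {0, 2, 4, 6, 8}            # a hypothetical target set
queries = [1, 2, 3]
answers = [False, True, True]  # the answer for 3 is wrong
print(meets_frequency(A, queries, answers, m=2))  # True: 2 of 3 correct
```

The inclusion problem the paper characterizes asks when every set answerable at one frequency (m, n) is also answerable at another frequency (m', n').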
Learning to Win Process-Control Games Watching Game-Masters
 Information and Computation
, 2002
Abstract

Cited by 1 (1 self)
The present paper focuses on some interesting classes of process-control games, where winning essentially means successfully controlling the process. A master for one of these games is an agent who plays a winning strategy. In this paper we investigate situations in which even a complete model (given by a program) of a particular game does not provide enough information to synthesize, even in the limit, a winning strategy. However, if in addition to getting a program, a machine may also watch masters play winning strategies, then the machine is able to learn in the limit a winning strategy for the given game. Studied are successful learning from arbitrary masters and from pedagogically useful selected masters. It is shown that selected masters are strictly more helpful for learning than are arbitrary masters. Both for learning from arbitrary masters and for learning from selected masters, though, there are cases where one can learn programs for winning strategies from master...
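A crude sketch of watching a master play, under stated assumptions: the two-state process, the move names, and the tabulation strategy are invented for illustration and are far simpler than the games the paper studies.

```python
# Toy sketch (assumptions: a finite-state process and a tabulating
# learner): the learner records the master's move for every game
# state it observes, converging in the limit once every reachable
# state has been seen at least once.

def learn_from_master(observations):
    """observations: iterable of (state, master_move) pairs."""
    strategy = {}
    for state, move in observations:
        strategy[state] = move  # later observations refresh earlier ones
    return strategy

# Hypothetical process with two states; the master keeps it in "ok".
master_play = [("ok", "hold"), ("drift", "correct"), ("ok", "hold")]
print(learn_from_master(master_play))  # {'ok': 'hold', 'drift': 'correct'}
```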
Learning From Context Without Coding Tricks
Abstract
Empirical studies of multitask learning provide some evidence that the performance of a learning system on its intended targets improves by presenting to the learning system additional related tasks, also called contexts, as additional input. Angluin, Gasarch, and Smith, as well as Kinber, Smith, Velauthapillai, and Wiehagen, have provided mathematical justification for this phenomenon in the inductive inference framework. However, their proofs rely heavily on self-referential coding tricks, that is, they directly code the solution of the learning problem into the context. In this work we prove, in the inductive inference setting, that multitask learning is extremely powerful even without using obvious coding tricks. Coding tricks are avoided in the powerful sense of Fulk's notion of robust learning. Also studied is the difficulty of the functional dependence between the intended target tasks and useful associated contexts. Department of CIS, University of Delaware, Newark, DE 19716,...