Results 1 - 10 of 17
The Power of Vacillation in Language Learning
1992
Abstract

Cited by 46 (13 self)
Some extensions are considered of Gold's influential model of language learning by machine from positive data. Studied are criteria of successful learning featuring convergence in the limit to vacillation between several alternative correct grammars. The main theorem of this paper is that there are classes of languages that can be learned if convergence in the limit to up to (n+1) exactly correct grammars is allowed, but which cannot be learned if convergence in the limit is to no more than n grammars, where the no more than n grammars can each make finitely many mistakes. This contrasts sharply with results of Barzdin and Podnieks and, later, Case and Smith, for learnability from both positive and negative data. A subset principle from a 1980 paper of Angluin is extended to the vacillatory and other criteria of this paper. This principle provides a necessary condition for circumventing overgeneralization in learning from positive data. It is applied to prove another theorem to the eff...
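The Gold-style setting this abstract builds on can be illustrated with a minimal sketch (ours, not from the paper): a learner for the toy class L_n = {0, ..., n} that conjectures a grammar index after each positive datum and converges in the limit.

```python
# Minimal illustration (ours, not from the paper) of Gold-style learning in
# the limit from positive data, for the toy class L_n = {0, 1, ..., n}.
# After each datum the learner conjectures an index n (a "grammar" for L_n);
# it converges once the largest element of the language has appeared.

def learner(text):
    """Yield a conjectured index n (for L_n) after each positive example."""
    seen_max = 0
    for datum in text:
        seen_max = max(seen_max, datum)
        yield seen_max  # conjecture: the language is {0, ..., seen_max}

# A finite prefix of a text for L_3 (a full text lists every element of L_3,
# each appearing eventually); the conjectures stabilize on the correct index 3.
prefix = [1, 0, 3, 2, 3, 1, 3]
conjectures = list(learner(prefix))
print(conjectures)  # [1, 1, 3, 3, 3, 3, 3]
```

Vacillatory learning, as studied in the paper, relaxes this convergence to a single index into convergence to a finite set of up to n correct grammars.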
The synthesis of language learners
 Information and Computation, 1999
Abstract

Cited by 16 (0 self)
An index for an r.e. class of languages (by definition) is a procedure which generates a sequence of grammars defining the class. An index for an indexed family of languages (by definition) is a procedure which generates a sequence of decision procedures defining the family. Studied is the metaproblem of synthesizing, from indices for r.e. classes and for indexed families of languages, various kinds of language learners for the corresponding classes or families indexed. Many positive results, as well as some negative results, are presented regarding the existence of such synthesizers. The negative results essentially provide lower bounds for the positive results. The proofs of some of the positive results yield, as pleasant corollaries, subset-principle or tell-tale style characterizations for the learnability of the corresponding classes or families indexed. For example, the indexed families of recursive languages that can be behaviorally correctly identified from positive data are surprisingly characterized by Angluin's (1980b) Condition 2 (the subset principle for circumventing overgeneralization).
Vacillatory and BC Learning on Noisy Data
2007
Abstract

Cited by 7 (5 self)
The present work employs a model of noise introduced earlier by the third author. In this model noisy data nonetheless uniquely determines the true data: correct information occurs infinitely often while incorrect information occurs only finitely often. The present paper considers the effects of this form of noise on vacillatory and behaviorally correct learning of grammars, both from positive data alone and from informant (positive and negative data). For learning from informant, the noise, in effect, destroys negative data. Various noisy-data hierarchies are exhibited, which, in some cases, are known to collapse when there is no noise. Noisy behaviorally correct learning is shown to obey a very strong "subset principle". It is shown, in many cases, how much power is needed to overcome the effects of noise. For example, the best we can do to simulate, in the presence of noise, the noise-free, no-mind-change cases takes infinitely many mind changes. One technical result is proved by a priority argument.
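The noise model admits a simple illustration (our sketch, not the paper's construction): since incorrect data occur only finitely often while correct data recur infinitely often, an occurrence-count threshold eventually filters out all noise on a long enough prefix.

```python
from collections import Counter

# Toy illustration (ours, not the paper's construction) of the noise model:
# each correct datum recurs infinitely often, each incorrect datum occurs
# only finitely often, so raising an occurrence threshold filters the noise.

def true_data_estimate(prefix, threshold):
    """Keep exactly the data items seen at least 'threshold' times."""
    counts = Counter(prefix)
    return {x for x, c in counts.items() if c >= threshold}

# Correct data {1, 2} recur; noise items 99 and 7 appear only finitely often.
stream = [1, 99, 2, 1, 7, 2, 1, 2, 1, 2, 1, 2, 99, 1, 2, 1, 2, 1, 2, 1]
print(true_data_estimate(stream, threshold=5))  # {1, 2}
```

In the limit, any threshold exceeding the (finite) total count of every noise item recovers exactly the true data.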
On Theory Revision with Queries
 In Proc. 12th Annu. Conf. on Comput. Learning Theory, 1999
Abstract

Cited by 7 (5 self)
The theory revision, or concept revision, problem is to correct a given, roughly correct concept. Given the representation of an initial concept, one would like to obtain a representation of the target concept by applying revisions, that is, syntactic modifications such as the deletion of a variable or a term. We give efficient revision algorithms using membership and equivalence queries for 2-term monotone DNF, monotone k-DNF, and read-once formulas. An example is given showing that some monotone DNF formulas cannot be revised efficiently. These results all assume that the revisions allowed are the replacements of a variable occurrence with a constant, which, for DNFs, corresponds to deletions of variables and terms. We also discuss a more general error model where, besides deletions, additions are also allowed. What the computational learning theory community calls a concept is often referred to as a theory elsewhere in artificial intelligence and logic...
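Deletion-based revision can be sketched in brute-force form (illustrative only; the paper gives query-efficient algorithms, whereas this toy stands in for an equivalence oracle by exhaustive evaluation over a small domain).

```python
from itertools import product

# Brute-force sketch of deletion-based revision (ours, not the paper's
# algorithm). A monotone DNF is a list of terms; each term is a frozenset
# of variable indices. The allowed revision is deleting a variable
# occurrence (deleting a term's last variable deletes the whole term).

def eval_dnf(dnf, assignment):
    return any(all(assignment[v] for v in term) for term in dnf)

def single_deletions(dnf):
    """All formulas reachable by deleting one variable occurrence."""
    for i, term in enumerate(dnf):
        for v in term:
            new_term = term - {v}
            yield dnf[:i] + ([new_term] if new_term else []) + dnf[i + 1:]

def revise(initial, target, n_vars):
    """Return a one-deletion revision agreeing with 'target' everywhere."""
    points = list(product([0, 1], repeat=n_vars))
    for candidate in single_deletions(initial):
        if all(eval_dnf(candidate, p) == bool(target(p)) for p in points):
            return candidate
    return None

# Initial concept x0&x1 + x2; target x0 + x2 (x1 was spuriously added).
initial = [frozenset({0, 1}), frozenset({2})]
target = lambda a: a[0] or a[2]
print(revise(initial, target, 3))  # [frozenset({0}), frozenset({2})]
```

The exhaustive check over all 2^n assignments is exactly what membership and equivalence queries let an efficient reviser avoid.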
Trees and Learning
 Proceedings of the Ninth Conference on Computational Learning Theory (COLT), ACM Press, 1996
Abstract

Cited by 5 (5 self)
We characterize FIN-, EX- and BC-learning, as well as the corresponding notions of team learning, in terms of isolated branches on uniformly strongly recursive sequences of trees. Further, the more restrictive models of FIN-learning and strong-monotonic BC-learning can be characterized in terms of isolated branches on a single tree. We discuss learning with additional information, where the learner receives an index for a strongly recursive tree such that the function to be learned is isolated on this tree. We show that EX-learning with this type of additional information is strictly more powerful than EX-learning. Inductive inference [1, 2, 4, 6, 10] deals with learning classes of recursive functions in the limit under certain convergence constraints. The most general setting is that of behaviorally correct learning (BC): for each prefix f(0)f(1)...f(n) of the recursive function f, the learner guesses a program for f; the learner succeeds if ...
Structural Measures for Games and Process Control in the Branch Learning Model
 Proceedings of the Third European Conference on Computational Learning Theory, volume 1208 of LNAI, 1997
Abstract

Cited by 4 (3 self)
Process control problems can be modeled as closed recursive games. Learning strategies for such games is equivalent to learning infinite recursive branches of recursive trees. We use this branch learning model to measure the difficulty of learning and synthesizing process controllers. We also measure the difference between several process learning criteria, and their difference from controller synthesis. As measure we use the information content (i.e. the Turing degree) of the oracle which a machine needs to attain the desired power. The investigated learning criteria are finite, EX-, BC-, weak BC- and online learning. Finite, EX- and BC-style learning are well known from inductive inference, while weak BC- and online learning arose with the new notion of branch (i.e. process) learning. For all considered criteria, including synthesis, we also solve the questions of their trivial degrees, their omniscient degrees and, with some restrictions, their inference degree...
Learning to Win Process-Control Games Watching Game-Masters
 Information and Computation, 2002
Abstract

Cited by 2 (1 self)
The present paper focuses on some interesting classes of process-control games, where winning essentially means successfully controlling the process. A master for one of these games is an agent who plays a winning strategy. In this paper we investigate situations in which even a complete model (given by a program) of a particular game does not provide enough information to synthesize, even in the limit, a winning strategy. However, if in addition to getting a program, a machine may also watch masters play winning strategies, then the machine is able to learn in the limit a winning strategy for the given game. Studied are successful learning from arbitrary masters and from pedagogically useful selected masters. It is shown that selected masters are strictly more helpful for learning than are arbitrary masters. Both for learning from arbitrary masters and for learning from selected masters, though, there are cases where one can learn programs for winning strategies from master...
The Power of Frequency Computation (Extended Abstract)
 In: Proceedings FCT'95, Lecture Notes in Computer Science, 1995
Abstract

Cited by 1 (1 self)
Martin Kummer and Frank Stephan, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, D-76128 Karlsruhe, Germany. {kummer, fstephan}@ira.uka.de. Abstract. The notion of frequency computation concerns approximative computations of n distinct parallel queries to a set A. A is called (m, n)-recursive if there is an algorithm which answers any n distinct parallel queries to A such that at least m answers are correct. This paper gives natural combinatorial characterizations of the fundamental inclusion problem, namely the question for which choices of the parameters m, n, m′, n′ every (m, n)-recursive set is (m′, n′)-recursive. We also characterize the inclusion problem restricted to recursively enumerable sets and the inclusion problem for the polynomial-time bounded version of frequency computation. Furthermore, using these characterizations we obtain many explicit inclusions and non-inclusions. Frequency computation is a classic...
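The (m, n)-recursiveness definition can be made concrete with a small checker (our illustration, not from the paper): it verifies, over a finite universe, that an answerer gets at least m of any n distinct parallel queries right.

```python
from itertools import combinations

# Illustrative check of the (m, n)-recursiveness definition (our toy, not
# from the paper): an answerer receives n distinct queries in parallel and
# returns n yes/no answers; A is (m, n)-recursive if some algorithm always
# gets at least m of the n answers right. We verify that property
# exhaustively over a small finite universe.

def is_mn_correct(answerer, A, m, n, universe):
    """Check that 'answerer' answers every n-tuple of distinct queries
    from 'universe' with at least m correct answers about membership in A."""
    for queries in combinations(universe, n):
        answers = answerer(queries)
        correct = sum(ans == (q in A) for q, ans in zip(queries, answers))
        if correct < m:
            return False
    return True

evens = set(range(0, 20, 2))                 # a (trivially) recursive set
exact = lambda qs: [q in evens for q in qs]  # decides A outright
all_no = lambda qs: [False] * len(qs)        # ignores its queries
print(is_mn_correct(exact, evens, 3, 3, range(6)))   # True
print(is_mn_correct(all_no, evens, 1, 3, range(6)))  # False (queries 0, 2, 4)
```

The interesting sets in the paper are non-recursive ones for which, unlike here, no answerer can decide membership outright yet some answerer still guarantees m of n correct answers.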