Results 1-10 of 23
The Power of Vacillation in Language Learning
, 1992
Abstract

Cited by 44 (11 self)
Some extensions of Gold's influential model of language learning by machine from positive data are considered. Studied are criteria of successful learning featuring convergence in the limit to vacillation between several alternative correct grammars. The main theorem of this paper is that there are classes of languages that can be learned if convergence in the limit to up to (n+1) exactly correct grammars is allowed, but which cannot be learned if convergence in the limit is to no more than n grammars, where each of the no more than n grammars may make finitely many mistakes. This contrasts sharply with results of Barzdin and Podnieks and, later, of Case and Smith, for learnability from both positive and negative data. A subset principle from a 1980 paper of Angluin is extended to the vacillatory and other criteria of this paper. This principle provides a necessary condition for circumventing overgeneralization in learning from positive data. It is applied to prove another theorem to the eff...
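The criteria above can be made concrete with a small, purely illustrative sketch (all names here are hypothetical, not from the paper): a learner reads ever-longer prefixes of a positive data presentation and emits grammar conjectures; Ex-style success means the conjecture sequence converges to a single correct grammar, while the vacillatory criteria allow it to settle, in the limit, on up to n alternative correct grammars.

```python
# Toy illustration of Gold-style learning from positive data and of
# vacillatory convergence. A "grammar" is crudely modelled as a frozenset
# listing a finite language; all names are illustrative, not the paper's.

def learner(data_so_far):
    """A trivial learner for finite languages: conjecture exactly
    the content seen so far."""
    return frozenset(data_so_far)

def vacillates_within(conjectures, n, tail=5):
    """True if the conjecture sequence has settled on at most n distinct
    grammars (checked crudely over the last `tail` conjectures)."""
    return len(set(conjectures[-tail:])) <= n

# A text (positive data presentation) for the finite language {1, 2, 3}:
text = [1, 2, 3, 2, 1, 3, 3, 2, 1, 2]
conjectures = [learner(text[:i + 1]) for i in range(len(text))]

# This trivial learner converges to a single correct grammar (n = 1),
# the special case of the vacillatory criteria with no vacillation.
assert vacillates_within(conjectures, 1)
assert conjectures[-1] == frozenset({1, 2, 3})
```

The theorem quoted above concerns learners that genuinely need the extra slack: classes learnable when up to (n+1) alternative grammars are tolerated in the limit, but not with n.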
Synthesizing Enumeration Techniques For Language Learning
 In Proceedings of the Ninth Annual Conference on Computational Learning Theory
, 1996
Abstract

Cited by 16 (7 self)
In this paper we assume, without loss of generality, that for all σ ⊆ τ, [M(σ) ≠ ?] ⇒ [M(τ) ≠ ?].
The synthesis of language learners
 Information and Computation
, 1999
Abstract

Cited by 12 (0 self)
An index for an r.e. class of languages (by definition) is a procedure which generates a sequence of grammars defining the class. An index for an indexed family of languages (by definition) is a procedure which generates a sequence of decision procedures defining the family. Studied is the metaproblem of synthesizing, from indices for r.e. classes and for indexed families of languages, various kinds of language learners for the corresponding classes or families indexed. Many positive results, as well as some negative results, are presented regarding the existence of such synthesizers. The negative results essentially provide lower bounds for the positive results. The proofs of some of the positive results yield, as pleasant corollaries, subset-principle or tell-tale style characterizations for the learnability of the corresponding classes or families indexed. For example, the indexed families of recursive languages that can be behaviorally correctly identified from positive data are, surprisingly, characterized by Angluin's (1980b) Condition 2 (the subset principle for circumventing overgeneralization).
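The two kinds of indices the abstract distinguishes can be sketched as follows; the concrete family of languages chosen here is purely illustrative. An index for an indexed family yields decision procedures, while an index for an r.e. class yields grammars, modelled below as enumerators.

```python
# Sketch of the two kinds of indices (toy example, not from the paper):
# L_i = the multiples of i+1, indexed by i = 0, 1, 2, ...

def indexed_family(i):
    """Index for an indexed family: returns a decision procedure for L_i."""
    return lambda x: x % (i + 1) == 0

def re_class_index(i):
    """Index for an r.e. class: returns a 'grammar' for L_i, crudely
    modelled as a generator that enumerates the language."""
    def enumerate_L():
        n = 0
        while True:
            yield n * (i + 1)
            n += 1
    return enumerate_L

decide_L2 = indexed_family(2)          # L_2 = multiples of 3
assert decide_L2(9) and not decide_L2(10)

gen = re_class_index(2)()              # enumerate L_2
assert [next(gen) for _ in range(4)] == [0, 3, 6, 9]
```

The synthesis metaproblem then asks for an effective procedure that, given such an index as input, outputs a learner for the indexed class or family.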
Robust Learning Aided by Context
 In Proceedings of the Eleventh Annual Conference on Computational Learning Theory
, 1998
Abstract

Cited by 10 (6 self)
Empirical studies of multitask learning provide some evidence that the performance of a learning system on its intended targets improves when related tasks, also called contexts, are presented to the learning system as additional input. Angluin, Gasarch, and Smith, as well as Kinber, Smith, Velauthapillai, and Wiehagen, have provided mathematical justification for this phenomenon in the inductive inference framework. However, their proofs rely heavily on self-referential coding tricks; that is, they directly code the solution of the learning problem into the context. Fulk has shown that for the Ex and Bc-anomaly hierarchies, such results, which rely on self-referential coding tricks, may not hold robustly. In this work we analyze robust versions of learning aided by context and show that, in contrast to Fulk's result above, the robust versions of ...
Robust Learning - Rich and Poor
 Journal of Computer and System Sciences
, 2000
Abstract

Cited by 7 (3 self)
A class C of recursive functions is called robustly learnable in the sense I (where I is any success criterion of learning) if not only C itself but all transformed classes Θ(C), where Θ is any general recursive operator, are learnable in the sense I. It was already shown, see [Ful90, JSW98], that for I = Ex (learning in the limit) robust learning is rich, in that there are classes that are not contained in any recursively enumerable class of recursive functions and are nevertheless robustly learnable. For several criteria I, the present paper makes much more precise where we can hope for robustly learnable classes and where we cannot. This is achieved in two ways. First, for I = Ex, it is shown that only consistently learnable classes can be uniformly robustly learnable. Second, some other learning types I are classified as to whether or not they contain rich robustly learnable classes. Moreover, the first results on separating robust learning from unifor...
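The notion of a transformed class Θ(C) can be sketched in a few lines; the pointwise operator below is merely an illustrative stand-in for a general recursive operator, and the names are not from the paper.

```python
# Sketch of transforming a class of (total) recursive functions by an
# operator, as in the definition of robust learning: the transformed
# class is Theta(C) = { Theta(f) : f in C }.

def theta(f):
    """Illustrative operator: Theta(f)(x) = f(x) + 1 (pointwise successor)."""
    return lambda x: f(x) + 1

# A tiny class C of recursive functions:
C = [lambda x: x, lambda x: 2 * x]
theta_C = [theta(f) for f in C]

# Robust I-learnability of C demands that Theta(C) be I-learnable for
# EVERY general recursive operator Theta, not just this one.
assert [g(3) for g in theta_C] == [4, 7]
```

The point of robustness is precisely to rule out classes that are learnable only because of self-referential coding tricks: such coded information is generally destroyed by some operator Θ.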
Synthesizing noise-tolerant language learners
 Theoretical Computer Science A
, 1997
Abstract

Cited by 7 (3 self)
An index for an r.e. class of languages (by definition) generates a sequence of grammars defining the class. An index for an indexed family of languages (by definition) generates a sequence of decision procedures defining the family. F. Stephan's model of noisy data is employed, in which, roughly, correct data crops up infinitely often and incorrect data only finitely often. Studied, then, is the synthesis, from indices for r.e. classes and for indexed families of languages, of various kinds of noise-tolerant language learners for the corresponding classes or families indexed. Many positive results, as well as some negative results, are presented regarding the existence of such synthesizers. The proofs of most of the positive results yield, as pleasant corollaries, strict subset-principle or tell-tale style characterizations for the noise-tolerant learnability of the corresponding classes or families indexed.
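The noise model as paraphrased in the abstract (each correct datum occurs infinitely often, each incorrect datum only finitely often) can be illustrated with a finite prefix generator; this is an illustrative sketch, not the paper's formal definition.

```python
import itertools

# Sketch of a noisy text for a language L: noise items appear only
# finitely often (here: once), while language items recur forever
# (here: cyclically). Only a finite prefix is materialized.

def noisy_text_prefix(language, noise, length):
    """Return a length-`length` prefix of a noisy presentation of `language`."""
    prefix = list(noise)                       # incorrect data: finitely often
    cycle = itertools.cycle(sorted(language))  # correct data: infinitely often
    while len(prefix) < length:
        prefix.append(next(cycle))
    return prefix[:length]

text = noisy_text_prefix({2, 4, 6}, noise=[5], length=10)
# The noise element 5 appears once; every language element recurs.
assert text.count(5) == 1
assert all(text.count(x) >= 2 for x in (2, 4, 6))
```

A noise-tolerant learner must converge to a correct grammar on every such presentation, despite the finitely many spurious data items.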
Synthesizing Learners Tolerating Computable Noisy Data
 In Proc. 9th International Workshop on Algorithmic Learning Theory, Lecture
, 1998
Abstract

Cited by 6 (0 self)
An index for an r.e. class of languages (by definition) generates a sequence of grammars defining the class. An index for an indexed family of languages (by definition) generates a sequence of decision procedures defining the family. F. Stephan's model of noisy data is employed, in which, roughly, correct data crops up infinitely often and incorrect data only finitely often. In a completely computable universe, all data sequences, even noisy ones, are computable. New to the present paper is the restriction that noisy data sequences be, nonetheless, computable! Studied, then, is the synthesis, from indices for r.e. classes and for indexed families of languages, of various kinds of noise-tolerant language learners for the corresponding classes or families indexed, where the noisy input data sequences are restricted to being computable. Many positive results, as well as some negative results, are presented regarding the existence of such synthesizers. The main positive result is surpris...
On the uniform learnability of approximations to nonrecursive functions
 Algorithmic Learning Theory: Tenth International Conference (ALT 1999), volume 1720 of Lecture Notes in Artificial Intelligence
, 1999
Abstract

Cited by 5 (3 self)
Blum and Blum (1975) showed that a class B of suitable recursive approximations to the halting problem is reliably EX-learnable. These investigations are carried on by showing that B is neither in NUM nor robustly EX-learnable. Since the definition of the class B is quite natural and does not contain any self-referential coding, B serves as evidence that the notion of robustness for learning is considerably more restrictive than intended. Moreover, variants of this problem obtained by approximating any given recursively enumerable set A instead of the halting problem K are studied. All corresponding function classes U(A) are still EX-inferable but may fail to be reliably EX-learnable, for example if A is non-high and hypersimple. Additionally, it is proved that U(A) is neither in NUM nor robustly EX-learnable provided A is part of a recursively inseparable pair, A is simple but not hypersimple, or A is neither recursive nor high. These results provide more evidence that there is still some need to find an adequate notion of "naturally learnable function classes."
Transformations That Preserve Learnability
 Algorithmic Learning Theory: Seventh International Workshop (ALT ’96), volume 1160 of Lecture Notes in Artificial Intelligence
, 1996
Abstract

Cited by 5 (0 self)
We consider transformations (performed by general recursive operators) mapping recursive functions into recursive functions. These transformations can be considered as mapping sets of recursive functions into sets of recursive functions. A transformation is said to preserve the identification type I if it always maps I-identifiable sets into I-identifiable sets. There are transformations preserving FIN but not EX, and there are transformations preserving EX but not FIN. However, transformations preserving EX_i always preserve EX_j for j < i.

In his 1872 inaugural lecture, delivered before taking up a professorship at the University of Erlangen, Felix Klein (1849-1925) presented an astonishing program for remaking geometry. The listeners were confused and even shocked. In this program (nowadays known as the Erlangen program), geometry was considered as "what remains invariant under motion transformations". It seemed unbelievable that a geometry textbook could have no ...