Results 1–10 of 12
Unlearning Helps
, 2000
"... . Overregularization seen in child language learning, re verb tense constructs, involves abandoning correct behaviors for incorrect ones and later reverting to correct behaviors. Quite a number of other child development phenomena also follow this Ushaped form of learning, unlearning, and relea ..."
Abstract

Cited by 18 (10 self)
Overregularization, seen in child language learning with respect to verb tense constructs, involves abandoning correct behaviors for incorrect ones and later reverting to correct behaviors. Quite a number of other child development phenomena also follow this U-shaped form of learning, unlearning, and relearning. A decisive learner does not do this and, in general, never abandons a hypothesis H for an inequivalent one and later conjectures a hypothesis equivalent to H. The present paper shows that decisiveness is a real restriction on Gold's model of iteratively (or in the limit) learning grammars for languages from positive data. This suggests that natural U-shaped learning curves may not be a mere accident of evolutionary genetic algorithms, but may be necessary for learning. The result also solves an open problem. Second-time decisive learners conjecture each of their hypotheses for a language at most twice. By contrast, they are shown not to restrict Gold's model of learning.
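The Gold-style setting this abstract refers to can be illustrated with a small toy sketch (an illustrative example, not the paper's construction): a learner reads a positive presentation of a language and emits a conjecture after each datum, succeeding if the conjectures converge to a correct one. The indexed family and learner below are assumptions chosen for simplicity.

```python
# Toy sketch of Gold-style learning in the limit from positive data.
# Hypothetical indexed family: L_i = {0, 1, ..., i}. The learner
# conjectures the least index i such that all data seen so far lie in L_i.

def learner(text):
    """Yield the learner's conjecture (an index i) after each datum."""
    seen = set()
    for x in text:
        seen.add(x)
        yield max(seen)  # least i with seen a subset of {0, ..., i}

# A positive presentation (text) for L_3 = {0, 1, 2, 3}:
text = [1, 0, 2, 2, 3, 1, 0, 3]
hypotheses = list(learner(text))
print(hypotheses)  # [1, 1, 2, 2, 3, 3, 3, 3] -- converges to index 3
```

Note that this particular learner is decisive in the paper's sense: its conjectured indices only ever increase, so it never returns to an abandoned hypothesis. The paper's point is that requiring decisiveness in general genuinely restricts what is learnable.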
Robust Learning – Rich and Poor
 Journal of Computer and System Sciences
, 2000
"... A class C of recursive functions is called robustly learnable in the sense I (where I is any success criterion of learning) if not only C itself but even all transformed classes \Theta(C) where \Theta is any general recursive operator, are learnable in the sense I. It was already shown before, see ..."
Abstract

Cited by 8 (3 self)
A class C of recursive functions is called robustly learnable in the sense I (where I is any success criterion of learning) if not only C itself but also all transformed classes Θ(C), where Θ is any general recursive operator, are learnable in the sense I. It was already shown, see [Ful90, JSW98], that for I = Ex (learning in the limit) robust learning is rich, in that there are classes that are not contained in any recursively enumerable class of recursive functions and are, nevertheless, robustly learnable. For several criteria I, the present paper makes much more precise where we can hope for robustly learnable classes and where we cannot. This is achieved in two ways. First, for I = Ex, it is shown that only consistently learnable classes can be uniformly robustly learnable. Second, some other learning types I are classified as to whether or not they contain rich robustly learnable classes. Moreover, the first results on separating robust learning from unifor...
Consistent Identification in the Limit of Rigid Grammars from Strings is NP-hard
 ICGI’02, LNAI 2484
, 2002
"... In [Bus87] and [BP90] some `discovery procedures' for classical categorial grammars were defined. These procedures take a set of structures (strings labeled with derivational information) as input and yield a set of hypotheses in the form of grammars. ..."
Abstract

Cited by 8 (0 self)
In [Bus87] and [BP90] some 'discovery procedures' for classical categorial grammars were defined. These procedures take a set of structures (strings labeled with derivational information) as input and yield a set of hypotheses in the form of grammars.
Developments from enquiries into the learnability of the pattern languages from positive data
, 2009
"... ..."
On the Synthesis of Strategies Identifying Recursive Functions
 Proceedings of the 14th Annual Conference on Computational Learning Theory, Lecture Notes in Artificial Intelligence 2111
, 2001
"... Abstract. A classical learning problem in Inductive Inference consists of identifying each function of a given class of recursive functions from a finite number of its output values. Uniform learning is concerned with the design of single programs solving infinitely many classical learning problems. ..."
Abstract

Cited by 3 (3 self)
A classical learning problem in Inductive Inference consists of identifying each function of a given class of recursive functions from a finite number of its output values. Uniform learning is concerned with the design of single programs solving infinitely many classical learning problems. For that purpose the program reads a description of an identification problem and is supposed to construct a technique for solving that particular problem. As can be proved, uniform solvability of collections of solvable identification problems is influenced more by the description of the problems than by the particular problems themselves. When prescribing a specific inference criterion (for example, learning in the limit), a clever choice of descriptions allows uniform solvability of all solvable problems, whereas even the simplest classes of recursive functions are not uniformly learnable without restricting the set of possible descriptions. Furthermore, the influence of the hypothesis spaces on uniform learnability is analysed.
Classes with easily learnable subclasses
 In Algorithmic Learning Theory: Thirteenth International Conference (ALT 2002)
, 2002
"... ..."
(Show Context)
Learning correction grammars
 Proceedings of the 20th Annual Conference on Learning Theory, 2007. To appear.
"... All intext references underlined in blue are linked to publications on ResearchGate, letting you access and read them immediately. ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
All intext references underlined in blue are linked to publications on ResearchGate, letting you access and read them immediately.
Mind Change Speedup for Learning Languages from Positive Data
"... Within the frameworks of learning in the limit of indexed classes of recursive languages from positive data and automatic learning in the limit of indexed classes of regular languages (with automatically computable sets of indices), we study the problem of minimizing the maximum number of mind chang ..."
Abstract
Within the frameworks of learning in the limit of indexed classes of recursive languages from positive data and automatic learning in the limit of indexed classes of regular languages (with automatically computable sets of indices), we study the problem of minimizing the maximum number of mind changes F_M(n) made by a learner M on all languages with indices not exceeding n. For inductive inference of recursive languages, we establish two conditions under which F_M(n) can be made smaller than any recursive unbounded nondecreasing function. We also establish how F_M(n) is affected if at least one of these two conditions does not hold. In the case of automatic learning, some partial results addressing speeding up the function F_M(n) are obtained.
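The mind-change count this abstract studies can be made concrete with a toy sketch (an illustrative example, not the paper's setting): a mind change is any position where the learner's conjecture differs from its previous one. The family and learner below are assumptions chosen for simplicity.

```python
# Toy sketch of counting mind changes (illustrative; not the paper's setting).

def mind_changes(hypotheses):
    """Count positions where consecutive conjectures differ."""
    return sum(1 for a, b in zip(hypotheses, hypotheses[1:]) if a != b)

# Hypothetical family L_i = {0, ..., i} with a learner conjecturing the
# least consistent index. Presenting L_n as 0, 1, ..., n forces n mind changes.
def least_index_learner(text):
    seen = set()
    for x in text:
        seen.add(x)
        yield max(seen)

n = 4
hyps = list(least_index_learner(range(n + 1)))
print(hyps, mind_changes(hyps))  # [0, 1, 2, 3, 4] 4
```

On this adversarial presentation the bound grows linearly in the index, which is the kind of worst-case growth that speedup results aim to reduce.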
Hypothesis Spaces for Learning
"... Abstract. In this paper we survey some results in inductive inference showing how learnability of a class of languages may depend on hypothesis space chosen. We also discuss results which consider how learnability is effected if one requires learning with respect to every suitable hypothesis space. ..."
Abstract
In this paper we survey some results in inductive inference showing how learnability of a class of languages may depend on the hypothesis space chosen. We also discuss results which consider how learnability is affected if one requires learning with respect to every suitable hypothesis space. Additionally, optimal hypothesis spaces, using which every learnable class is learnable, are considered.
Identification Criteria in Uniform Inductive Inference
"... Uniform Inductive Inference is concerned with the existence and the learning behaviour of strategies identifying infinitely many classes of recursive functions. The success of such strategies depends on the hypothesis spaces they use, as well as on the chosen identification criteria resulting from a ..."
Abstract
Uniform Inductive Inference is concerned with the existence and the learning behaviour of strategies identifying infinitely many classes of recursive functions. The success of such strategies depends on the hypothesis spaces they use, as well as on the chosen identification criteria resulting from additional demands in the basic learning model. These identification criteria correspond to different hierarchies of learning power – depending on the choice of hypothesis spaces. In most cases finite classes of recursive functions are sufficient to expose an increase in the learning power given by the uniform learning models corresponding to a pair of identification ...