Results 1–10 of 27
Unlearning Helps
, 2000
Abstract

Cited by 18 (9 self)
Overregularization seen in child language learning, regarding verb-tense constructs, involves abandoning correct behaviors for incorrect ones and later reverting to the correct behaviors. Quite a number of other child-development phenomena also follow this U-shaped pattern of learning, unlearning, and relearning. A decisive learner does not do this and, in general, never abandons a hypothesis H for an inequivalent one only to later conjecture a hypothesis equivalent to H. The present paper shows that decisiveness is a real restriction on Gold's model of learning grammars for languages (in the limit) from positive data. This suggests that natural U-shaped learning curves may not be a mere accident of evolutionary genetic algorithms, but may be necessary for learning. The result also solves an open problem. Second-time decisive learners conjecture each of their hypotheses for a language at most twice. By contrast, they are shown not to restrict Gold's model of lea...
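The decisiveness notion above lends itself to a small concrete check. The following Python sketch (the function name, the trace representation, and the string equivalence are illustrative assumptions, not from the paper) flags a finite run of conjectures as U-shaped exactly when some hypothesis is abandoned for an inequivalent one and later re-adopted:

```python
def is_u_shaped(conjectures, equiv):
    """Return True if the learner abandons a hypothesis for an
    inequivalent one and later returns to an equivalent hypothesis,
    i.e. the learner is not decisive on this run."""
    n = len(conjectures)
    for i in range(n):
        for j in range(i + 1, n):
            if not equiv(conjectures[i], conjectures[j]):
                # hypothesis i was abandoned by step j ...
                for k in range(j + 1, n):
                    if equiv(conjectures[i], conjectures[k]):
                        return True  # ... and re-adopted at step k
    return False

same = lambda a, b: a == b
# Child-style past-tense trace: "went" -> "goed" -> "went"
print(is_u_shaped(["went", "goed", "went"], same))  # -> True (U-shaped)
print(is_u_shaped(["goed", "went", "went"], same))  # -> False (decisive run)
```

On the classic past-tense trace went → goed → went the check fires, while a decisive run never triggers it.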
Angluin's Theorem for Indexed Families of R.e. Sets and Applications
, 1996
Abstract

Cited by 14 (0 self)
We extend Angluin's (1980) theorem to characterize identifiability of indexed families of r.e. languages, as opposed to indexed families of recursive languages. We also prove some variants characterizing conservativity and two other similar restrictions, paralleling Zeugmann, Lange, and Kapur's (1992, 1995) results for indexed families of recursive languages.

1 Introduction

A significant portion of the work of recent years in the field of inductive inference of formal languages, as initiated by Gold (1967), stems from Angluin's (1980b) theorem, which characterizes when an indexed family of recursive languages is identifiable in the limit from positive data in the sense of Gold. Up until around 1980, a prevalent view had been that inductive inference from positive data is too weak to be of much theoretical interest. This misconception was due to the negative result in Gold's original paper, which says that any class of languages that contains every finite language and at least one infini...
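Angluin's condition can be made concrete for a finite toy class of finite languages: a class is identifiable from positive data iff every language L in it has a finite "tell-tale" subset T such that no language of the class contains T while being a proper subset of L. A brute-force Python sketch under a finite-universe assumption (function names are illustrative; note that for finite classes of finite languages the condition always holds, and the classic failures arise only for infinite classes such as all finite sets plus one infinite set):

```python
from itertools import chain, combinations

def telltale_exists(L, clazz):
    """Check Angluin's tell-tale condition for language L in a finite
    class of finite languages: is there a finite T subset of L such
    that no L' in the class satisfies T <= L' and L' a proper subset
    of L?  (Brute-force over all subsets of L.)"""
    L = frozenset(L)
    subsets = chain.from_iterable(
        combinations(sorted(L), r) for r in range(len(L) + 1))
    for T in map(set, subsets):
        if not any(T <= Lp and Lp < L for Lp in map(frozenset, clazz)):
            return True  # T is a tell-tale for L
    return False

# {1,2} has tell-tale {2}; the toy class {{1},{1,2}} is identifiable
clazz = [{1}, {1, 2}]
print(all(telltale_exists(L, clazz) for L in clazz))  # -> True
```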
Language learning from texts: Degrees of intrinsic complexity and their characterizations
 In: Proceedings of the 13th Annual Conference on Computational Learning Theory
, 2000
Abstract

Cited by 8 (3 self)
This paper deals with two problems: 1) what makes languages learnable in the limit by natural strategies of varying hardness; 2) what makes classes of languages the hardest ones to learn. To quantify the hardness of learning, we use intrinsic complexity, based on reductions between learning problems. Two types of reductions are considered: weak reductions, mapping texts (representations of languages) to texts, and strong reductions, mapping languages to languages. For both types of reductions, characterizations of complete (hardest) classes in terms of their algorithmic and topological potentials have been obtained. To characterize the strong complete degree, we discovered a new and natural complete class capable of “coding” any learning problem using the density of the set of rational numbers. We have also discovered and characterized rich hierarchies of degrees of complexity based on “core” natural learning problems. The classes in these hierarchies contain “multidimensional” languages, where the information learned from one dimension aids in learning the other dimensions. In one formalization of this idea, the grammars learned for dimensions 1, 2, ..., k specify the “subspace” for dimension k + 1, while the learning strategy for every dimension is predefined. In our other formalization, a “pattern” learned from dimension k specifies the learning strategy for dimension k + 1. A number of open problems are discussed.
Results on Memory-Limited U-Shaped Learning
Abstract

Cited by 6 (1 self)
U-shaped learning is a learning behaviour in which the learner first learns a given target behaviour, then unlearns it and finally relearns it. Such a behaviour, observed by psychologists, for example, in the learning of past tenses of English verbs, has been widely discussed among psychologists and cognitive scientists as a fundamental example of the non-monotonicity of learning. The previous theory literature has studied whether or not U-shaped learning, in the context of Gold’s formal model of learning languages from positive data, is necessary for learning some tasks. It is clear that human learning involves memory limitations. In the present paper we therefore consider the question of the necessity of U-shaped learning for some learning models featuring memory limitations. Our results show that the question of the necessity of U-shaped learning in this memory-limited setting depends on delicate tradeoffs between the learner’s abilities to remember its own previous conjecture, to store some values in its long-term memory, and to make queries about whether or not items occur in previously seen data, as well as on the learner’s choice of hypothesis space.
Non-U-Shaped Vacillatory and Team Learning
, 2008
Abstract

Cited by 6 (2 self)
U-shaped learning behaviour in cognitive development involves learning, unlearning and relearning. It occurs, for example, in learning irregular verbs. The prior cognitive-science literature is occupied with how humans do it, for example, by general rules versus tables of exceptions. This paper is mostly concerned with whether U-shaped learning behaviour may be necessary in the abstract mathematical setting of inductive inference, that is, in the computational learning theory following the framework of Gold. All notions considered are learning from text, that is, from positive data. Previous work showed that U-shaped learning behaviour is necessary for behaviourally correct learning but not for syntactically convergent learning in the limit (= explanatory learning). The present paper establishes the necessity for the hierarchy of classes of vacillatory learning, where a behaviourally correct learner has to satisfy the additional constraint that it vacillates in the limit between at most b grammars, where b ∈ {2, 3, ..., ∗}. Non-U-shaped vacillatory learning is shown to be restrictive: every non-U-shaped vacillatorily learnable class is already learnable in the limit. Furthermore, if vacillatory learning with the parameter b = 2 is possible, then non-U-shaped behaviourally correct learning is also possible. But for b = 3, surprisingly, there is a class witnessing that this implication fails.
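The vacillation parameter b can be illustrated on a finite trace of conjectures: count the pairwise-distinct grammars occurring in the tail of the sequence. A toy Python sketch (syntactic distinctness and a finite-trace cutoff are simplifying assumptions; in the formal model the count is over grammars output infinitely often):

```python
def limit_vacillation(conjectures, tail):
    """Count pairwise-distinct grammars in the tail of a conjecture
    sequence: 1 corresponds to syntactic convergence (explanatory
    learning), a bound b to vacillatory learning with parameter b."""
    distinct = []
    for h in conjectures[-tail:]:
        if h not in distinct:
            distinct.append(h)
    return len(distinct)

trace = ["g1", "g2", "g1", "g2", "g1"]
print(limit_vacillation(trace, tail=4))  # -> 2: vacillates between two grammars
print(limit_vacillation(["g1", "g1", "g1"], tail=2))  # -> 1: converges
```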
Variants of Iterative Learning
, 1998
Abstract

Cited by 3 (2 self)
We investigate the principal learning capabilities of iterative learners in more detail. Thereby, we confine ourselves to studying the learnability of indexable concept classes. The general scenario of iterative learning is as follows. An iterative learner successively takes as input one element of a text (or an informant) for a target concept as well as its previously made hypothesis, and outputs a new hypothesis about the target concept. The sequence of hypotheses has to converge to a hypothesis correctly describing the target concept.
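The iterative-learner loop described here is easy to sketch. Below is a toy Python example (the concept class L_n = {0, ..., n} and all names are illustrative assumptions): the next conjecture depends only on the previous hypothesis and the current text element, and on any text for some L_n the sequence of hypotheses converges:

```python
def iterative_learner(hypothesis, datum):
    """One step of an iterative learner: the new conjecture depends
    only on the previous hypothesis and the current text element,
    never on the full history of data seen so far.

    Toy concept class: L_n = {0, 1, ..., n}; a hypothesis is the
    index n, and the conjecture is the largest element seen."""
    return max(hypothesis, datum)

def run(text, initial=0):
    h = initial
    for x in text:
        h = iterative_learner(h, x)
    return h

# A text (positive data) for L_3 = {0,1,2,3}; the learner converges to 3.
print(run([1, 0, 3, 2, 3, 1]))  # -> 3
```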
Incremental learning with temporary memory
 THEORETICAL COMPUTER SCIENCE
, 2010
Abstract

Cited by 2 (1 self)
In the inductive inference framework of learning in the limit, a variation of the bounded example memory (Bem) language learning model is considered. Intuitively, the new model constrains the learner’s memory not only in how much data may be retained, but also in how long that data may be retained. More specifically, the model requires that, if a learner commits an example x to memory at some stage of the learning process, then there is some subsequent stage at which x no longer appears in the learner’s memory. This model is called temporary example memory (Tem) learning. In some sense, it captures the idea that memories fade. Many interesting results concerning the Tem-learning model are presented. For example, there exists a class of languages that can be identified by memorizing k + 1 examples in the Tem sense, but that cannot be identified by memorizing k examples in the Bem sense. On the other hand, there exists a class of languages that can be identified by memorizing just 1 example in the Bem sense, but that cannot be identified by memorizing any number of examples in the Tem sense. (The proof of this latter result involves an infinitary self-reference argument.) Results are also presented concerning the special cases of learning indexable classes of languages and learning (arbitrary) classes of infinite languages.
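One simple way to realize "memories fade" is time-to-live eviction: every stored example is guaranteed to leave memory after a fixed number of stages, which satisfies the Tem requirement that each memorized x eventually disappears. The Python sketch below (class name, eviction policy, and the max-element conjecture rule are assumptions for illustration, not the paper's construction) combines this with a Bem-style bound of k stored examples:

```python
from collections import deque

class TemLearner:
    """Sketch of a temporary-example-memory (Tem) learner: at most k
    examples stored at a time, and every stored example is evicted
    after ttl stages, so no example stays in memory forever."""
    def __init__(self, k, ttl):
        self.k, self.ttl = k, ttl
        self.memory = deque()  # pairs (example, stage_stored)
        self.stage = 0

    def step(self, datum):
        self.stage += 1
        # evict examples whose retention time has expired
        while self.memory and self.stage - self.memory[0][1] >= self.ttl:
            self.memory.popleft()
        if len(self.memory) < self.k:
            self.memory.append((datum, self.stage))
        # toy conjecture rule: the largest example still remembered
        return max((x for x, _ in self.memory), default=None)

learner = TemLearner(k=2, ttl=3)
print([learner.step(x) for x in [5, 1, 7, 2]])  # -> [5, 5, 5, 2]
```

Note how the example 5 vanishes from memory after three stages, so the final conjecture drops to 2; a Bem learner with the same bound k could have kept it indefinitely.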
Resource Bounded Next Value and Explanatory Identification: Learning Automata, Patterns and Polynomials On-Line
Abstract

Cited by 1 (0 self)
This paper considers learning via predicting the next value; this concept is also known as "online learning" or "forecasting". The concept is combined with the limited memory model and has two variants: Exact NV-learning has a polynomial resource bound, depending on the sizes of the current input and of the concept, on long-term memory and on working space (or time); in addition, the number of errors is limited by a polynomial in the concept size. Independent NV-learning has polynomial resource bounds, depending on the size of the current input only, on long-term memory and on working space (time). The following is shown: A class of functions is independently NV-learnable iff it is uniformly computable in PSPACE. Exact NV-learning is a proper restriction of independent NV-learning. For the well-known classes of pattern languages, regular languages and polynomials, it is investigated under which variations of the resource bounds they are learnable or not learnable. Also an explanatory version of ...
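Next-value prediction can be illustrated for the polynomial case mentioned above: a degree-d polynomial has constant d-th forward differences, so the next value is extrapolated from the last d + 1 values, and prediction errors stop once enough data has been seen. A Python sketch (function names and the integer-sequence setting are assumptions for illustration):

```python
def nv_learn(f_values, degree):
    """Next-value (NV) learning sketch for polynomials of known degree:
    predict f(n) from earlier values by forward differences, counting
    prediction errors.  A degree-d polynomial has constant d-th
    differences, so after d+1 values every prediction is correct."""
    errors = 0
    seen = []
    for true_val in f_values:
        if len(seen) <= degree:
            prediction = 0  # not enough data yet; default guess
        else:
            window = seen[-(degree + 1):]
            diffs = [window]
            while len(diffs[-1]) > 1:
                prev = diffs[-1]
                diffs.append([b - a for a, b in zip(prev, prev[1:])])
            # next value = sum of the last entry of each difference row
            prediction = sum(d[-1] for d in diffs)
        if prediction != true_val:
            errors += 1
        seen.append(true_val)
    return errors

# f(x) = x^2 + 1: errors only on the first degree+1 = 3 values
vals = [x * x + 1 for x in range(10)]
print(nv_learn(vals, degree=2))  # -> 3
```

The total number of errors here is bounded by degree + 1, in line with the NV requirement that errors be limited by a polynomial in the concept size.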