Results 1–6 of 6
Iterative Learning of Simple External Contextual Languages
"... Abstract. It is investigated for which choice of a parameter q, denoting the number of contexts, the class of simple external contextual languages is iteratively learnable. On one hand, the class admits, for all values of q, polynomial time learnability provided an adequate choice of the hypothesis ..."
Abstract

Cited by 9 (2 self)
 Add to MetaCart
(Show Context)
Abstract. It is investigated for which choice of a parameter q, denoting the number of contexts, the class of simple external contextual languages is iteratively learnable. On one hand, the class admits, for all values of q, polynomial time learnability provided an adequate choice of the hypothesis space is given. On the other hand, additional constraints like consistency and conservativeness, or the use of a one-one hypothesis space, change the picture: iterative learning limits the long term memory of the learner to the current hypothesis, and these constraints further hinder storage of information via padding of this hypothesis. It is shown that if q > 3, then simple external contextual languages are not iteratively learnable using a class-preserving one-one hypothesis space, while for q = 1 they are iteratively learnable, even in polynomial time. It is also investigated for which choice of the parameters the simple external contextual languages can be learnt by a consistent and conservative iterative learner.
Incremental learning with temporary memory
Theoretical Computer Science, 2010
"... In the inductive inference framework of learning in the limit, a variation of the bounded example memory (Bem) language learning model is considered. Intuitively, the new model constrains the learner’s memory not only in how much data may be retained, but also in how long that data may be retained. ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
In the inductive inference framework of learning in the limit, a variation of the bounded example memory (Bem) language learning model is considered. Intuitively, the new model constrains the learner’s memory not only in how much data may be retained, but also in how long that data may be retained. More specifically, the model requires that, if a learner commits an example x to memory in some stage of the learning process, then there is some subsequent stage for which x no longer appears in the learner’s memory. This model is called temporary example memory (Tem) learning. In some sense, it captures the idea that memories fade. Many interesting results concerning the Tem-learning model are presented. For example, there exists a class of languages that can be identified by memorizing k + 1 examples in the Tem sense, but that cannot be identified by memorizing k examples in the Bem sense. On the other hand, there exists a class of languages that can be identified by memorizing just 1 example in the Bem sense, but that cannot be identified by memorizing any number of examples in the Tem sense. (The proof of this latter result involves an infinitary self-reference argument.) Results are also presented concerning the special cases of learning indexable classes of languages and of learning (arbitrary) classes of infinite languages.
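The Bem/Tem distinction above can be illustrated with a toy sketch (an assumption-laden illustration, not the paper's construction): a k-bounded example memory learner may carry at most k past examples between stages in addition to its current hypothesis, and the Tem variant further demands that any memorized example eventually be dropped. The eviction policy below (keep only the K most recent distinct data) is one hypothetical way to guarantee that nothing stays in memory forever.

```python
# Illustrative sketch of a bounded-example-memory learner for finite
# languages. K, bem_step, and run are hypothetical names chosen for this
# example; the papers' actual constructions are more involved.

K = 2  # memory bound, chosen for illustration


def bem_step(hypothesis, memory, datum):
    """One stage of a Bem-style learner.

    memory holds at most K past examples; keeping only the K most recent
    distinct data means every memorized example is eventually discarded,
    a Tem-flavoured "memories fade" policy.
    """
    new_memory = ([datum] + [m for m in memory if m != datum])[:K]
    new_hypothesis = hypothesis | set(new_memory)
    return new_hypothesis, new_memory


def run(text):
    h, mem = set(), []
    for x in text:
        h, mem = bem_step(h, mem, x)
    return h, mem


h, mem = run([1, 2, 3, 4])
print(sorted(h), mem)  # hypothesis grows, but memory never exceeds K items
```

Note that the hypothesis here accumulates all data only because finite languages admit such a trivial conjecture; the point of the memory bound is that the learner's *inter-stage* storage beyond the hypothesis is limited to K examples.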
Learning with Temporary Memory (Expanded Version)
, 2008
"... In the inductive inference framework of learning in the limit, a variation of the bounded example memory (Bem) language learning model is considered. Intuitively, the new model constrains the learner’s memory not only in how much data may be retained, but also in how long that data may be retained. ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
In the inductive inference framework of learning in the limit, a variation of the bounded example memory (Bem) language learning model is considered. Intuitively, the new model constrains the learner’s memory not only in how much data may be retained, but also in how long that data may be retained. More specifically, the model requires that, if a learner commits an example x to memory in some stage of the learning process, then there is some subsequent stage for which x no longer appears in the learner’s memory. This model is called temporary example memory (Tem) learning. In some sense, it captures the idea that memories fade. Many interesting results concerning the Tem-learning model are presented. For example, there exists a class of languages that can be identified by memorizing k + 1 examples in the Tem sense, but that cannot be identified by memorizing k examples in the Bem sense. On the other hand, there exists a class of languages that can be identified by memorizing just 1 example in the Bem sense, but that cannot be identified by memorizing any number of examples in the Tem sense. (The proof of this latter result involves an infinitary self-reference argument.) Results are also presented concerning the special cases of learning indexable classes of languages and of learning (arbitrary) classes of infinite languages.
Optimal Language Learning (Expanded Version)
, 2008
"... Abstract. Gold’s original paper on inductive inference introduced a notion of an optimal learner. Intuitively, a learner identifies a class of objects optimally iff there is no other learner that: requires as little of each presentation of each object in the class in order to identify that object, a ..."
Abstract
 Add to MetaCart
Abstract. Gold’s original paper on inductive inference introduced a notion of an optimal learner. Intuitively, a learner identifies a class of objects optimally iff there is no other learner that: requires as little of each presentation of each object in the class in order to identify that object, and, for some presentation of some object in the class, requires less of that presentation in order to identify that object. Wiehagen considered this notion in the context of function learning, and characterized an optimal function learner as one that is class-preserving, consistent, and (in a very strong sense) non-U-shaped, with respect to the class of functions learned. Herein, Gold’s notion is considered in the context of language learning. Intuitively, a language learner identifies a class of languages optimally iff there is no other learner that: requires as little of each text for each language in the class in order to identify that language, and, for some text for some language in the class, requires less of that text in order to identify that language. Many interesting results concerning optimal language learners are presented. First, it is shown that a characterization analogous to Wiehagen’s does not hold in this setting. Specifically, optimality is not sufficient to guarantee Wiehagen’s conditions; though, those conditions are sufficient to guarantee optimality. Second, it is shown that the failure of this analog is not due to a restriction on algorithmic learning power imposed by non-U-shapedness (in the strong form employed by Wiehagen). That is, non-U-shapedness, even in this strong form, does not restrict algorithmic learning power. Finally, for an arbitrary optimal learner F of a class of languages L, it is shown that F optimally identifies a subclass K of L iff F is class-preserving with respect to K.
Learning without Coding
, 2010
"... Iterative learning is a model of language learning from positive data, due to Wiehagen. When compared to a learner in Gold’s original model of language learning from positive data, an iterative learner can be thought of as memorylimited. However, an iterative learner can memorize some input elemen ..."
Abstract
 Add to MetaCart
Iterative learning is a model of language learning from positive data, due to Wiehagen. When compared to a learner in Gold’s original model of language learning from positive data, an iterative learner can be thought of as memory-limited. However, an iterative learner can memorize some input elements by coding them into the syntax of its hypotheses. A main concern of this paper is: to what extent are such coding tricks necessary? One means of preventing some such coding tricks is to require that the hypothesis space used be free of redundancy, i.e., that it be one-one. In this context, we make the following contributions. By extending a result of Lange & Zeugmann, we show that many interesting and non-trivial classes of languages can be iteratively identified using a Friedberg numbering as the hypothesis space. (Recall that a Friedberg numbering is a one-one effective numbering of all computably enumerable sets.) An example of such a class is the class of pattern languages over an arbitrary alphabet. On the other hand, we show that there exists a class of languages that cannot be iteratively identified using any one-one effective numbering as the hypothesis space. We also consider an iterative-like learning model in which the computational component of the learner is modeled as an enumeration operator, as opposed to a partial computable function. In this new model, there are no hypotheses, and, thus, no syntax in which the learner can encode what elements it has or has not yet seen. We show that there exists a class of languages that can be identified under this new model, but that cannot be iteratively identified. On the other hand, we show that there exists a class of languages that cannot be identified under this new model, but that can be iteratively identified using a Friedberg numbering as the hypothesis space.
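The iterative model and the "coding trick" it permits can be sketched in a few lines (a toy illustration under simplifying assumptions, not the paper's construction): the learner's only inter-stage state is its current hypothesis, and its update is a function of that hypothesis and the newest datum alone. For finite languages, taking the hypothesis to be the set of elements seen so far makes the coding trick explicit, since the hypothesis syntax itself stores every input element.

```python
# Toy sketch of an iterative learner for finite languages. The names
# iterative_update and learn are hypothetical, chosen for this example.

def iterative_update(hypothesis, datum):
    """One step of an iterative learner.

    hypothesis : frozenset -- the learner's entire memory between stages
    datum      : the next positive example from the text
    """
    # The new conjecture depends only on (hypothesis, datum); no other
    # record of past data survives -- except what the hypothesis encodes.
    return hypothesis | {datum}


def learn(text):
    h = frozenset()  # initial (empty) conjecture
    for x in text:
        h = iterative_update(h, x)
    return h


print(sorted(learn([3, 1, 4, 1, 5])))  # -> [1, 3, 4, 5]
```

A one-one (redundancy-free) hypothesis space blocks exactly this kind of padding: with only one index per language, the learner cannot smuggle extra bookkeeping into equivalent alternative names for the same conjecture.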