Results 1 - 7 of 7
Language Learning from Texts: Mind Changes, Limited Memory and Monotonicity (Extended Abstract)
Information and Computation, 1995
Abstract

Cited by 26 (9 self)
The paper explores language learning in the limit under various constraints on the number of mind changes, memory, and monotonicity. We define language learning with limited (long-term) memory and prove that learning with limited memory is exactly the same as learning via set-driven machines (when the order of the input string is not taken into account). Further, we show that every language learnable via a set-driven machine is learnable via a conservative machine (making only justifiable mind changes). We obtain a variety of separation results for learning with a bounded number of mind changes or limited memory under restrictions on monotonicity. Many separation results have a variant: if a criterion A can be separated from B, then often it is possible to find a family L of languages such that L is A- and B-learnable, but while it is possible to restrict the number of mind changes or long-term memory...
On the Intrinsic Complexity of Learning
Information and Computation, 1995
Abstract

Cited by 25 (6 self)
A new view of learning is presented. The basis of this view is a natural notion of reduction. We prove completeness and relative-difficulty results. An infinite hierarchy of intrinsically more and more difficult-to-learn concepts is presented. Our results indicate that the complexity notion captured by our new notion of reduction differs dramatically from the traditional studies of the complexity of the algorithms performing learning tasks. 1 Introduction Traditional studies of inductive inference have focused on illuminating various strata of learnability based on varying the definition of learnability. The research following Valiant's PAC model [Val84] and Angluin's teacher/learner model [Ang88] paid very careful attention to calculating the complexity of the learning algorithm. We present a new view of learning, based on the notion of reduction, that captures a different perspective on learning complexity than all prior studies. Based on our preliminary reports, Jain...
Results on Memory-Limited U-Shaped Learning
Abstract

Cited by 6 (1 self)
Abstract. U-shaped learning is a learning behaviour in which the learner first learns a given target behaviour, then unlearns it and finally relearns it. Such a behaviour, observed by psychologists, for example, in the learning of past tenses of English verbs, has been widely discussed among psychologists and cognitive scientists as a fundamental example of the non-monotonicity of learning. Previous theory literature has studied whether or not U-shaped learning, in the context of Gold’s formal model of learning languages from positive data, is necessary for learning some tasks. It is clear that human learning involves memory limitations. In the present paper we consider, then, the question of the necessity of U-shaped learning for some learning models featuring memory limitations. Our results show that the question of the necessity of U-shaped learning in this memory-limited setting depends on delicate trade-offs between the learner’s ability to remember its own previous conjecture, to store some values in its long-term memory, to make queries about whether or not items occur in previously seen data, and on the learner’s choice of hypothesis space.
A Survey of Inductive Inference with an Emphasis on Queries
Complexity, Logic, and Recursion Theory, number 187 in Lecture Notes in Pure and Applied Mathematics Series, 1997
Abstract

Cited by 4 (0 self)
this paper M_0, M_1, ... is a standard list of all Turing machines, M
Relations Between Two Types of Memory in Inductive Inference
Abstract

Cited by 1 (0 self)
We consider inductive inference with limited memory [2]. We show that there exists a set U of total recursive functions such that:
- U can be learned with linear long-term memory (and no short-term memory);
- U can be learned with logarithmic long-term memory (and some amount of short-term memory);
- if U is learned with sublinear long-term memory, then the short-term memory exceeds any recursive function.
Thus an open problem posed by Freivalds, Kinber and Smith [2] is solved. To prove our result, we use Kolmogorov complexity. 1 Introduction There are two kinds of complexity in inductive inference (and learning in general):
- the complexity of the computations necessary for learning;
- the complexity of learning itself.
There are some complexity measures that better reflect the complexity of computations and some measures that better reflect the complexity of learning. Several attempts to separate these two kinds of complexity have been made. For space (memory) complexi...
On Learning To Coordinate: Random Bits Help, Insightful Normal Forms, and Competency Isomorphisms
Abstract
A mere bounded number of random bits judiciously employed by a probabilistically correct algorithmic coordinator is shown to increase the power of learning to coordinate compared to deterministic algorithmic coordinators. Furthermore, these probabilistic algorithmic coordinators are provably not characterized in power by teams of deterministic ones. An insightful, enumeration technique based,...
Experience, generations, and limits in machine learning
Abstract
www.elsevier.com/locate/tcs Experience, generations, and limits in machine learning