Results 1–10 of 14
On the Intrinsic Complexity of Learning
Information and Computation, 1995
Cited by 31 (8 self)
A new view of learning is presented. The basis of this view is a natural notion of reduction. We prove completeness and relative difficulty results. An infinite hierarchy of intrinsically more and more difficult to learn concepts is presented. Our results indicate that the complexity notion captured by our new notion of reduction differs dramatically from that of traditional studies of the complexity of the algorithms performing learning tasks.
1 Introduction. Traditional studies of inductive inference have focused on illuminating various strata of learnability based on varying the definition of learnability. The research following Valiant's PAC model [Val84] and Angluin's teacher/learner model [Ang88] paid very careful attention to calculating the complexity of the learning algorithm. We present a new view of learning, based on the notion of reduction, that captures a different perspective on learning complexity than all prior studies. Based on our preliminary reports, Jain...
Language Learning from Texts: Mind Changes, Limited Memory and Monotonicity (Extended Abstract)
Information and Computation, 1995
Cited by 30 (12 self)
The paper explores language learning in the limit under various constraints on the number of mind changes, memory, and monotonicity. We define language learning with limited (long-term) memory and prove that learning with limited memory is exactly the same as learning via set-driven machines (when the order of the input string is not taken into account). Further, we show that every language learnable via a set-driven machine is learnable via a conservative machine (making only justifiable mind changes). We obtain a variety of separation results for learning with a bounded number of mind changes or limited memory under restrictions on monotonicity. Many separation results have a variant: if a criterion A can be separated from B, then often it is possible to find a family L of languages such that L is A- and B-learnable, but while it is possible to restrict the number of mind changes or long-term memory...
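The notions in this abstract can be made concrete with a small sketch. The family of languages and the learner below are a hypothetical toy example (not from the paper): the learner's conjecture depends only on the set of data seen so far, so it is set-driven, and its mind changes can be counted directly.

```python
# Toy Gold-style learner from positive data, with mind changes counted.
# Hypothetical target family: L_n = {0, 1, ..., n}. The learner conjectures
# the largest element seen so far, so on any text for L_n it stabilises
# after finitely many mind changes.

def learn_from_text(text):
    """Return the sequence of conjectures and the number of mind changes."""
    conjectures = []
    mind_changes = 0
    current = None
    for datum in text:
        new = datum if current is None else max(current, datum)
        if current is not None and new != current:
            mind_changes += 1          # the learner revises its conjecture
        current = new
        conjectures.append(current)
    return conjectures, mind_changes

# A text (presentation) for L_3 = {0, 1, 2, 3}:
conjs, changes = learn_from_text([1, 0, 3, 2, 3, 3])
print(conjs, changes)  # prints [1, 1, 3, 3, 3, 3] 1
```

Because the conjecture is a function of the content of the data alone, reordering the text cannot change the final hypothesis, only when it is reached.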
Results on Memory-Limited U-Shaped Learning
Cited by 6 (1 self)
Abstract. U-shaped learning is a learning behaviour in which the learner first learns a given target behaviour, then unlearns it and finally relearns it. Such a behaviour, observed by psychologists, for example, in the learning of past tenses of English verbs, has been widely discussed among psychologists and cognitive scientists as a fundamental example of the non-monotonicity of learning. Previous theory literature has studied whether or not U-shaped learning, in the context of Gold’s formal model of learning languages from positive data, is necessary for learning some tasks. It is clear that human learning involves memory limitations. In the present paper we consider, then, the question of the necessity of U-shaped learning for some learning models featuring memory limitations. Our results show that the question of the necessity of U-shaped learning in this memory-limited setting depends on delicate trade-offs between the learner’s ability to remember its own previous conjecture, to store some values in its long-term memory, to make queries about whether or not items occur in previously seen data, and on the learner’s choice of hypothesis space.
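The learn-unlearn-relearn pattern described in this abstract can be illustrated with a minimal sketch built on its own motivating example, the past tense of English verbs; the conjecture sequence below is hypothetical and for illustration only.

```python
# Minimal sketch of a U-shaped conjecture sequence: the learner outputs a
# correct hypothesis, abandons it for an (overregularized) incorrect one,
# and finally returns to the correct hypothesis.

TARGET = "went"  # correct past tense of "go"

# Hypothetical stages of a learner's conjectures for the past tense of "go":
conjectures = ["went", "goed", "went"]

correct = [c == TARGET for c in conjectures]
print(correct)  # prints [True, False, True] -- the "U" shape
```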
A Survey of Inductive Inference with an Emphasis on Queries
Complexity, Logic, and Recursion Theory, number 187 in Lecture Notes in Pure and Applied Mathematics Series, 1997
Cited by 4 (0 self)
this paper M0, M1, ... is a standard list of all Turing machines, M ...
Resource Bounded Next Value and Explanatory Identification: Learning Automata, Patterns and Polynomials On-Line
Cited by 1 (0 self)
This paper considers learning via predicting the next value; this concept is also known as "on-line learning" or "forecasting". The concept is combined with the limited memory model and has two variants: exact NV-learning has a polynomial resource bound, depending on the sizes of the current input and of the concept, on long-term memory and on working space (or time); in addition, the number of errors is limited by a polynomial in the concept size. Independent NV-learning has polynomial resource bounds, depending on the size of the current input only, on long-term memory and on working space (time). The following is shown: a class of functions is independently NV-learnable iff it is uniformly computable in PSPACE. Exact NV-learning is a proper restriction of independent NV-learning. For the well-known classes of pattern languages, regular languages and polynomials, it is investigated under which variations of the resource bounds they are learnable or not learnable. Also an explanatory version of ...
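Next-value prediction as described in this abstract can be sketched for a simple hypothetical class, linear functions f(x) = a*x + b over the integers; this toy forecaster is not from the paper, but shows the shape of the model: the learner must commit to a prediction before each value is revealed, and its errors are bounded.

```python
# Toy next-value ("NV") predictor for the hypothetical class of linear
# functions f(x) = a*x + b over the integers. After seeing two values the
# learner fits a line and predicts every later value correctly, so the
# number of prediction errors is at most 2.

def nv_predict(values):
    """Predict each value before it is revealed; return the error count."""
    errors = 0
    seen = []
    for x, true_value in enumerate(values):
        if len(seen) < 2:
            prediction = 0            # no basis for a guess yet
        else:
            a = seen[1] - seen[0]     # slope from the first two values
            b = seen[0]               # intercept, since f(0) was seen first
            prediction = a * x + b
        if prediction != true_value:
            errors += 1
        seen.append(true_value)
    return errors

values = [3 * x + 1 for x in range(10)]   # f(x) = 3x + 1
print(nv_predict(values))  # prints 2: errors only on the first two values
```

The resource bounds in the abstract then constrain how much of `seen` such a learner may keep (long-term memory) and how much space or time each prediction step may use.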
Relations between Two Types of Memory in Inductive Inference, unpublished manuscript
Experience, generations, and limits in machine learning
www.elsevier.com/locate/tcs Experience, generations, and limits in machine learning
On Learning To Coordinate: Random Bits Help, Insightful Normal Forms, and Competency Isomorphisms
2007
A mere bounded number of random bits, judiciously employed by a probabilistically correct algorithmic coordinator, is shown to increase the power of learning to coordinate compared to deterministic algorithmic coordinators. Furthermore, these probabilistic algorithmic coordinators are provably not characterized in power by teams of deterministic ones. An insightful normal form characterization, based on an enumeration technique, of the classes that are learnable by total computable coordinators is given. These normal forms are for insight only, since it is shown that the complexity of the normal form of a total computable coordinator can be infeasible compared to the original coordinator. Montagna and Osherson showed that the competence class of a total coordinator cannot be strictly improved by another total coordinator. It is shown in the present paper that the competencies of any two total coordinators are the same modulo isomorphism. Furthermore, a completely effective, index set version of this competency isomorphism result is given, where all the coordinators are total computable. We also investigate the competence classes of total coordinators from the points of view of topology and descriptive set theory.
Automatic Learners with Feedback Queries
Abstract. Automatic classes are classes of languages for which a finite automaton can decide whether a given element is in a set given by its index. The present work studies the learnability of automatic families by automatic learners which, in each round, output a hypothesis and update a long-term memory, depending on the input datum, via an automatic function, that is, via a function whose graph is recognised by a finite automaton. Many variants of automatic learners are investigated: where the long-term memory is restricted to be just the prior hypothesis whenever this exists, cannot be of size larger than the size of the longest example, or has to consist of a constant number of examples seen so far. Furthermore, learnability is also studied with respect to queries which reveal information about past data or past computation history; the number of queries per round is bounded by a constant. These models are generalisations of the model of feedback queries given by Lange, Wiehagen and Zeugmann.
Learning and Extending Sublanguages
A number of natural models for learning in the limit are introduced to deal with the situation when a learner is required to provide a grammar covering the input even if only a part of the target language is available. Examples of language families are exhibited that are learnable in one model and not learnable in another. Some characterizations of learnability of algorithmically enumerable families of languages for the models in question are obtained. Since learnability of any part of the target language does not imply monotonicity of the learning process, we also consider our models under an additional monotonicity constraint.