Results 1 - 9 of 9
On the Impact of Forgetting on Learning Machines
 Journal of the ACM
, 1993
Cited by 10 (3 self)
This paper contributes toward the goal of understanding how a computer can be programmed to learn by isolating features of incremental learning algorithms that theoretically enhance their learning potential. In particular, we examine the effects of imposing a limit on the amount of information that a learning algorithm can hold in its memory as it attempts to learn. This work was facilitated by an international agreement under NSF Grant 9119540.
On the Synthesis of Strategies Identifying Recursive Functions
 Proceedings of the 14th Annual Conference on Computational Learning Theory, Lecture Notes in Artificial Intelligence 2111
, 2001
Cited by 3 (3 self)
Abstract. A classical learning problem in Inductive Inference consists of identifying each function of a given class of recursive functions from a finite number of its output values. Uniform learning is concerned with the design of single programs solving infinitely many classical learning problems. For that purpose the program reads a description of an identification problem and is supposed to construct a technique for solving that particular problem. As can be proved, uniform solvability of collections of solvable identification problems is influenced more by the description of the problems than by the particular problems themselves. When prescribing a specific inference criterion (for example, learning in the limit), a clever choice of descriptions allows uniform solvability of all solvable problems, whereas even the simplest classes of recursive functions are not uniformly learnable without restricting the set of possible descriptions. Furthermore, the influence of the hypothesis spaces on uniform learnability is analysed.
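As an illustration of the classical (non-uniform) learning problem the abstract refers to, the following sketch shows the standard identification-by-enumeration strategy: after each output value, the learner conjectures the first hypothesis in a fixed enumeration that is consistent with everything seen so far. The hypothesis class and data stream here are invented toy examples, not from the paper; if the target function is in the enumerated class, the conjectures converge in the limit.

```python
def identification_by_enumeration(hypotheses, stream):
    """Gold-style learner sketch: after each datum (x, f(x)), conjecture the
    index of the first enumerated hypothesis consistent with all data so far.
    If the target lies in `hypotheses`, the conjectures converge in the limit.
    (If no hypothesis is consistent at some step, no conjecture is emitted.)"""
    seen = []
    for x, y in stream:
        seen.append((x, y))
        for i, h in enumerate(hypotheses):
            if all(h(a) == b for a, b in seen):
                yield i
                break

# Toy class: constant functions h_i(x) = i  (the (lambda i: ...)(i) idiom
# binds i immediately, avoiding Python's late-binding closure pitfall).
hyps = [(lambda i: (lambda x: i))(i) for i in range(5)]
stream = [(n, 3) for n in range(4)]  # graph of the constant-3 function
conjectures = list(identification_by_enumeration(hyps, stream))
# conjectures == [3, 3, 3, 3]: the learner locks onto index 3 immediately
```

In general the learner may change its mind finitely often before stabilizing; convergence of the conjecture sequence to a single correct index is exactly identification in the limit.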
Consistency Conditions for Inductive Inference of Recursive Functions
Cited by 2 (2 self)
Abstract. A consistent learner is required to correctly and completely reflect in its actual hypothesis all data received so far. Though this demand sounds quite plausible, it may lead to the unsolvability of the learning problem. Therefore, in the present paper several variations of consistent learning are introduced and studied. These variations allow a so-called δ-delay, relaxing the consistency demand to all but the last δ data. Additionally, we introduce the notion of coherent learning (again with δ-delay), requiring the learner to correctly reflect only the last datum seen (with δ-delay, only the (n − δ)-th datum). Our results are threefold. First, it is shown that all models of coherent learning with δ-delay are exactly as powerful as their corresponding consistent learning models with δ-delay. Second, we provide characterizations for consistent learning with δ-delay in terms of complexity. Finally, we establish strict hierarchies for all consistent learning models with δ-delay in dependence on δ.
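The δ-delay relaxation described above can be made concrete with a small sketch (an illustrative reading, not code from the paper): a hypothesis is consistent with δ-delay if it reproduces every observed value except possibly the most recent δ of them.

```python
def consistent_with_delay(hypothesis, data, delta=0):
    """Check delta-delay consistency: the hypothesis must reproduce all
    observed (x, y) pairs except possibly the last `delta` of them.
    delta=0 is ordinary (strict) consistency."""
    required = data[:len(data) - delta] if delta > 0 else data
    return all(hypothesis(x) == y for x, y in required)

# Toy data from f(x) = x^2, where the newest datum has not yet been matched
data = [(0, 0), (1, 1), (2, 4), (3, 10)]
h = lambda x: x * x
consistent_with_delay(h, data, delta=0)  # False: h disagrees on (3, 10)
consistent_with_delay(h, data, delta=1)  # True: the last datum is exempt
```

Coherence with δ-delay would instead check only the single (n − δ)-th datum, which, as the paper shows, yields exactly the same learning power as the corresponding consistency demand.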
Prudence in vacillatory language identification
 Math. Systems Theory
, 1995
Cited by 2 (1 self)
The present paper settles a question about ‘prudent’ ‘vacillatory’ identification of languages. Consider a scenario in which an algorithmic device M is presented with all and only the elements of a language L, and M conjectures a sequence, possibly infinite, of grammars. Three different criteria for success of M on L have been extensively investigated in formal language learning theory. If M converges to a single correct grammar for L, then the criterion of success is Gold’s seminal notion of TxtEx-identification. If M converges to a finite number of correct grammars for L, then the criterion of success is called TxtFex-identification. And, if M, after a finite number of incorrect guesses, outputs only correct grammars for L (possibly infinitely many distinct grammars), then the criterion of success is known as TxtBc-identification. A learning machine is said to be prudent according to a particular criterion of success just in case the only grammars it ever conjectures are for languages that it can learn according to that criterion. This notion was introduced by Osherson, Stob, and Weinstein with a view to investigating certain proposals for characterizing natural languages in linguistic theory. Fulk showed that prudence does not restrict TxtEx-identification, and later Kurtz and Royer showed that prudence does not restrict TxtBc-identification. The present paper shows that prudence does not restrict TxtFex-identification.
Consistent and Coherent Learning . . .
, 2007
A consistent learner is required to correctly and completely reflect in its actual hypothesis all data received so far. Though this demand sounds quite plausible, it may lead to the unsolvability of the learning problem. Therefore, in the present paper several variations of consistent learning are introduced and studied. These variations allow a so-called δ-delay, relaxing the consistency demand to all but the last δ data. Additionally, we introduce the notion of coherent learning (again with δ-delay), requiring the learner to correctly reflect only the last datum seen (with δ-delay, only the (n − δ)-th datum). Our results are threefold. First, it is shown that all models of coherent learning with δ-delay are exactly as powerful as their corresponding consistent learning models with δ-delay. Second, we provide characterizations for consistent learning with δ-delay in terms of complexity and computable numberings. Finally, we establish strict hierarchies for all consistent learning models with δ-delay in dependence on δ.
Identification Criteria in Uniform Inductive Inference
Uniform Inductive Inference is concerned with the existence and the learning behaviour of strategies identifying infinitely many classes of recursive functions. The success of such strategies depends on the hypothesis spaces they use, as well as on the chosen identification criteria resulting from additional demands in the basic learning model. These identification criteria correspond to different hierarchies of learning power, depending on the choice of hypothesis spaces. In most cases finite classes of recursive functions are sufficient to expose an increase in the learning power given by the uniform learning models corresponding to a pair of identification criteria.
Learning Recursive Functions: A Survey
, 2008
Studying the learnability of classes of recursive functions has attracted considerable interest for at least four decades. Starting with Gold’s (1967) model of learning in the limit, many variations, modifications and extensions have been proposed. These models differ in some of the following: the mode of convergence, the requirements intermediate hypotheses have to fulfill, the set of allowed learning strategies, the source of information available to the learner during the learning process, the set of admissible hypothesis spaces, and the learning goals. A considerable amount of work done in this field has been devoted to the characterization of function classes learnable in a given model, the influence of natural, intuitive postulates on the resulting learning power, the incorporation of randomness into the learning process, and the complexity of learning, among other topics. On the occasion of Rolf Wiehagen’s 60th birthday, the last four decades of research in that area are surveyed, with a special focus on Rolf Wiehagen’s work, which has made him one of the most influential scientists in the theory of learning recursive functions.