Results 1 - 10 of 10
On the Impact of Forgetting on Learning Machines
Journal of the ACM, 1993
"... this paper contributes toward the goal of understanding how a computer can be programmed to learn by isolating features of incremental learning algorithms that theoretically enhance their learning potential. In particular, we examine the effects of imposing a limit on the amount of information that ..."
Cited by 15 (5 self)
This paper contributes toward the goal of understanding how a computer can be programmed to learn, by isolating features of incremental learning algorithms that theoretically enhance their learning potential. In particular, we examine the effects of imposing a limit on the amount of information that a learning algorithm can hold in its memory as it attempts to learn. This work was facilitated by an international agreement under NSF Grant 9119540.
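The memory-limited model described above can be given a rough Gold-style formalization. The following sketch is illustrative only; the learner M, the memory bound k, and the memory string mem_n are assumed notation, not the paper's own:

% Learning in the limit with k-bounded long-term memory (illustrative sketch).
% The learner M sees f(0), f(1), ... one value at a time; between steps it may
% carry only its current hypothesis e_n and a memory string of length at most k.
\begin{align*}
  (e_{n+1}, \mathrm{mem}_{n+1}) &= M(\mathrm{mem}_n, e_n, f(n)), \qquad |\mathrm{mem}_{n+1}| \le k,\\
  M \text{ learns } f &\iff \exists n_0\, \forall n \ge n_0:\ e_n = e_{n_0} \text{ and } \varphi_{e_{n_0}} = f.
\end{align*}

Intuitively, decreasing k forces the learner to forget earlier data, which is exactly the restriction whose impact on learning potential the paper studies.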
On the Synthesis of Strategies Identifying Recursive Functions
Proceedings of the 14th Annual Conference on Computational Learning Theory, Lecture Notes in Artificial Intelligence 2111, 2001
"... Abstract. A classical learning problem in Inductive Inference consists of identifying each function of a given class of recursive functions from a finite number of its output values. Uniform learning is concerned with the design of single programs solving infinitely many classical learning problems. ..."
Cited by 3 (3 self)
A classical learning problem in Inductive Inference consists of identifying each function of a given class of recursive functions from a finite number of its output values. Uniform learning is concerned with the design of single programs solving infinitely many classical learning problems. For that purpose the program reads a description of an identification problem and is supposed to construct a technique for solving that particular problem. As can be proved, uniform solvability of collections of solvable identification problems is influenced more by the description of the problems than by the particular problems themselves. When a specific inference criterion is prescribed (for example, learning in the limit), a clever choice of descriptions allows uniform solvability of all solvable problems, whereas even the simplest classes of recursive functions are not uniformly learnable without restricting the set of possible descriptions. Furthermore, the influence of the hypothesis spaces on uniform learnability is analysed.
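As a rough formalization of the uniform model (illustrative notation, not the paper's): let D be a set of descriptions and C_d the class of recursive functions described by d. A single strategy S uniformly solves the collection when feeding it the description yields a successful learner for each described class:

% Uniform learning (illustrative sketch). sigma ranges over finite
% initial segments f(0), ..., f(n) of the function to be identified.
\[
  S \text{ uniformly solves } \{\mathcal{C}_d : d \in D\}
  \iff
  \forall d \in D:\ \sigma \mapsto S(d, \sigma) \text{ identifies every } f \in \mathcal{C}_d \text{ in the limit.}
\]

On this reading, the abstract's point is that learnability of the collection hinges on which descriptions d are admitted, not on the classes C_d themselves.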
Prudence in vacillatory language identification
Mathematical Systems Theory, 1995
"... ..."
(Show Context)
Iterative Learning from Positive Data and Negative Counterexamples
2007
"... A model for learning in the limit is defined where a (so-called iterative) learner gets all positive examples from the target language, tests every new conjecture with a teacher (oracle) if it is a subset of the target language (and if it is not, then it receives a negative counterexample), and uses ..."
Cited by 2 (2 self)
A model for learning in the limit is defined where a (so-called iterative) learner gets all positive examples from the target language, asks a teacher (oracle) whether each new conjecture is a subset of the target language (and, if it is not, receives a negative counterexample), and uses only limited long-term memory (incorporated in the conjectures). Three variants of this model are compared: the learner receives least negative counterexamples; counterexamples whose size is bounded by the maximum size of the input seen so far; or arbitrary ones. A surprising result is that sometimes the absence of bounded counterexamples can help an iterative learner, whereas arbitrary counterexamples are useless. We also compare our learnability model with other relevant models of learnability in the limit, study how our model works for indexed classes of recursive languages, and show that learners in our model can work in a non-U-shaped way, never abandoning the first correct conjecture.
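The interaction can be sketched as follows (an illustrative rendering; h_n, W_h, and the no-counterexample token # are assumed notation). The iterative learner's entire long-term memory is its current conjecture, which it updates from the next positive example and the teacher's answer to a subset query:

% Iterative learning with negative counterexamples (illustrative sketch).
% W_h denotes the language generated by hypothesis h; L is the target language.
\[
  h_{n+1} = M(h_n, x_{n+1}, c_{n+1}),
  \qquad
  c_{n+1} =
  \begin{cases}
    \text{some } c \in W_{h_n} \setminus L, & \text{if } W_{h_n} \not\subseteq L,\\
    \#, & \text{if } W_{h_n} \subseteq L.
  \end{cases}
\]

The three variants in the abstract then differ only in which counterexample c the teacher may return: the least one, one bounded by the largest input seen so far, or an arbitrary one.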
Learning Recursive Functions: A Survey
2008
"... Studying the learnability of classes of recursive functions has attracted considerable interest for at least four decades. Starting with Gold’s (1967) model of learning in the limit, many variations, modifications and extensions have been proposed. These models differ in some of the following: the m ..."
Cited by 2 (0 self)
Studying the learnability of classes of recursive functions has attracted considerable interest for at least four decades. Starting with Gold's (1967) model of learning in the limit, many variations, modifications and extensions have been proposed. These models differ in some of the following: the mode of convergence, the requirements intermediate hypotheses have to fulfill, the set of allowed learning strategies, the source of information available to the learner during the learning process, the set of admissible hypothesis spaces, and the learning goals. A considerable amount of work in this field has been devoted to characterizing the function classes learnable in a given model, to the influence of natural, intuitive postulates on the resulting learning power, to the incorporation of randomness into the learning process, and to the complexity of learning, among other topics. On the occasion of Rolf Wiehagen's 60th birthday, the last four decades of research in this area are surveyed, with a special focus on Rolf Wiehagen's work, which has made him one of the most influential scientists in the theory of learning recursive functions.
Consistency Conditions for Inductive Inference of Recursive Functions
"... Abstract. A consistent learner is required to correctly and completely reflect in its actual hypothesis all data received so far. Though this demand sounds quite plausible, it may lead to the unsolvability of the learning problem. Therefore, in the present paper several variations of consistent lear ..."
Cited by 2 (2 self)
A consistent learner is required to correctly and completely reflect in its actual hypothesis all data received so far. Though this demand sounds quite plausible, it may lead to the unsolvability of the learning problem. Therefore, in the present paper several variations of consistent learning are introduced and studied. These variations allow a so-called δ-delay, relaxing the consistency demand to all but the last δ data. Additionally, we introduce the notion of coherent learning (again with δ-delay), requiring the learner to correctly reflect only the last datum seen (more precisely, the (n − δ)th datum). Our results are threefold. First, it is shown that all models of coherent learning with δ-delay are exactly as powerful as their corresponding consistent learning models with δ-delay. Second, we provide characterizations for consistent learning with δ-delay in terms of complexity. Finally, we establish strict hierarchies for all consistent learning models with δ-delay in dependence on δ.
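In symbols (an illustrative sketch with assumed notation): if, after seeing f(0), ..., f(n), the learner outputs hypothesis h_n computing the function \varphi_{h_n}, the two demands read:

% Consistency vs. coherence with delta-delay (illustrative sketch).
\begin{align*}
  \text{consistent with } \delta\text{-delay}: &\quad \forall n\ \forall x \le n - \delta:\ \varphi_{h_n}(x) = f(x),\\
  \text{coherent with } \delta\text{-delay}: &\quad \forall n \ge \delta:\ \varphi_{h_n}(n - \delta) = f(n - \delta).
\end{align*}

Setting δ = 0 recovers the classical consistency demand; the hierarchies the abstract mentions concern how learning power grows with δ.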
Identification Criteria in Uniform Inductive Inference
"... Uniform Inductive Inference is concerned with the existence and the learning behaviour of strategies identifying infinitely many classes of recursive functions. The success of such strategies depends on the hypothesis spaces they use, as well as on the chosen identification criteria resulting from a ..."
Uniform Inductive Inference is concerned with the existence and the learning behaviour of strategies identifying infinitely many classes of recursive functions. The success of such strategies depends on the hypothesis spaces they use, as well as on the chosen identification criteria resulting from additional demands in the basic learning model. These identification criteria correspond to different hierarchies of learning power, depending on the choice of hypothesis spaces. In most cases, finite classes of recursive functions are sufficient to expose an increase in the learning power given by the uniform learning models corresponding to a pair of identification criteria.
Consistent and Coherent Learning . . .
2007
"... A consistent learner is required to correctly and completely reflect in its actual hypothesis all data received so far. Though this demand sounds quite plausible, it may lead to the unsolvability of the learning problem. Therefore, in the present paper several variations of consistent learning are i ..."
A consistent learner is required to correctly and completely reflect in its actual hypothesis all data received so far. Though this demand sounds quite plausible, it may lead to the unsolvability of the learning problem. Therefore, in the present paper several variations of consistent learning are introduced and studied. These variations allow a so-called δ-delay, relaxing the consistency demand to all but the last δ data. Additionally, we introduce the notion of coherent learning (again with δ-delay), requiring the learner to correctly reflect only the last datum seen (more precisely, the (n − δ)th datum). Our results are manifold. First, it is shown that all models of coherent learning with δ-delay are exactly as powerful as their corresponding consistent learning models with δ-delay. Second, we provide characterizations for consistent learning with δ-delay in terms of complexity and computable numberings. Finally, we establish strict hierarchies for all consistent learning models with δ-delay in dependence on δ.