Ignoring Data May be the Only Way to Learn Efficiently, 1994
"... In designing learning algorithms it seems quite reasonable to construct them in a way such that all data the algorithm already has obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail, i.e., it may lead to t ..."
Abstract
-
Cited by 23 (13 self)
- Add to MetaCart
In designing learning algorithms it seems quite reasonable to construct them in such a way that all data the algorithm has obtained so far are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may fail completely, i.e., it may render the learning problem unsolvable, or it may rule out any efficient solution of it. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to ignore data.
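To make the demand precise, consistency is usually formalized as follows (a standard formulation in inductive inference, not quoted from the paper; $\varphi$ denotes a fixed acceptable programming system and $h_n$ the hypothesis output after seeing $n+1$ values of the target $f$):

\[
h_n = M(f(0), \ldots, f(n)) \ \text{is consistent} \iff \varphi_{h_n}(x) = f(x) \ \text{for all } x \le n .
\]

The abstract's claim, restated: for a suitable natural learning problem, every learner forced to satisfy this condition on all intermediate hypotheses cannot succeed in polynomial time, whereas a learner allowed to output temporarily inconsistent hypotheses can.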
Robust Learning - Rich and Poor. Journal of Computer and System Sciences, 2000
"... A class C of recursive functions is called robustly learnable in the sense I (where I is any success criterion of learning) if not only C itself but even all transformed classes \Theta(C) where \Theta is any general recursive operator, are learnable in the sense I. It was already shown before, see ..."
Abstract
-
Cited by 8 (3 self)
- Add to MetaCart
A class C of recursive functions is called robustly learnable in the sense I (where I is any success criterion of learning) if not only C itself but also every transformed class Θ(C), where Θ is any general recursive operator, is learnable in the sense I. It was shown earlier, see [Ful90, JSW98], that for I = Ex (learning in the limit) robust learning is rich: there are classes that are not contained in any recursively enumerable class of recursive functions and are nevertheless robustly learnable. For several criteria I, the present paper makes much more precise where we can hope for robustly learnable classes and where we cannot. This is achieved in two ways. First, for I = Ex, it is shown that only consistently learnable classes can be uniformly robustly learnable. Second, some other learning types I are classified as to whether or not they contain rich robustly learnable classes. Moreover, the first results on separating robust learning from uniform...
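With Ex as the success criterion, the definition reads (restating the abstract in symbols):

\[
\mathcal{C} \ \text{is robustly Ex-learnable} \iff \Theta(\mathcal{C}) \in \mathrm{Ex} \ \text{for every general recursive operator } \Theta .
\]

Since the identity is such an operator, robust learnability implies ordinary learnability; the cited richness result says that robust Ex-learnability is nonetheless not confined to subclasses of recursively enumerable classes.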
On the Synthesis of Strategies Identifying Recursive Functions. Proceedings of the 14th Annual Conference on Computational Learning Theory, Lecture Notes in Artificial Intelligence 2111, 2001
"... Abstract. A classical learning problem in Inductive Inference consists of identifying each function of a given class of recursive functions from a finite number of its output values. Uniform learning is concerned with the design of single programs solving infinitely many classical learning problems. ..."
Abstract
-
Cited by 3 (3 self)
- Add to MetaCart
(Show Context)
A classical learning problem in inductive inference consists of identifying each function of a given class of recursive functions from a finite number of its output values. Uniform learning is concerned with the design of single programs solving infinitely many classical learning problems. For that purpose the program reads a description of an identification problem and is supposed to construct a technique for solving that particular problem. As can be proved, uniform solvability of collections of solvable identification problems is influenced more by the description of the problems than by the particular problems themselves. When a specific inference criterion is prescribed (for example, learning in the limit), a clever choice of descriptions allows uniform solvability of all solvable problems, whereas even the simplest classes of recursive functions are not uniformly learnable without restricting the set of possible descriptions. Furthermore, the influence of the hypothesis spaces on uniform learnability is analysed.
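The setting can be summarized in one line (a sketch; writing $\mathcal{C}_d$ for the class described by $d$ and $D$ for the set of admissible descriptions, notation assumed here rather than taken from the paper):

\[
S \ \text{uniformly learns} \ \{\mathcal{C}_d\}_{d \in D} \iff \forall d \in D \ \forall f \in \mathcal{C}_d : \ \lambda\sigma.\, S(d, \sigma) \ \text{learns } f .
\]

So a single strategy $S$, once given the description $d$, must behave like a successful learner for $\mathcal{C}_d$; the abstract's point is that the existence of such an $S$ hinges on how the classes are described far more than on the classes themselves.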
Learning Recursive Functions: A Survey, 2008
"... Studying the learnability of classes of recursive functions has attracted considerable interest for at least four decades. Starting with Gold’s (1967) model of learning in the limit, many variations, modifications and extensions have been proposed. These models differ in some of the following: the m ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
Studying the learnability of classes of recursive functions has attracted considerable interest for at least four decades. Starting with Gold's (1967) model of learning in the limit, many variations, modifications and extensions have been proposed. These models differ in some of the following: the mode of convergence, the requirements intermediate hypotheses have to fulfill, the set of allowed learning strategies, the source of information available to the learner during the learning process, the set of admissible hypothesis spaces, and the learning goals. A considerable amount of work in this field has been devoted to the characterization of function classes learnable in a given model, the influence of natural, intuitive postulates on the resulting learning power, the incorporation of randomness into the learning process, and the complexity of learning, among other topics. On the occasion of Rolf Wiehagen's 60th birthday, the last four decades of research in that area are surveyed, with a special focus on Rolf Wiehagen's work, which has made him one of the most influential scientists in the theory of learning recursive functions.
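As a toy illustration of Gold's model (code of our own, not from the survey): the classical identification-by-enumeration learner conjectures, after each new value, the first hypothesis in a fixed list that is consistent with all data seen so far; on any target occurring in the list, the guesses converge to a correct index after finitely many mind changes.

from typing import Callable, List

def limit_learner(hypotheses: List[Callable[[int], int]],
                  target: Callable[[int], int],
                  steps: int) -> List[int]:
    """Guess sequence of an identification-by-enumeration learner."""
    data: List[int] = []      # values f(0), ..., f(n) received so far
    guesses: List[int] = []
    for n in range(steps):
        data.append(target(n))
        # conjecture the first hypothesis consistent with all data so far
        guesses.append(next(i for i, h in enumerate(hypotheses)
                            if all(h(x) == data[x] for x in range(len(data)))))
    return guesses

# Two hypotheses: the constant-0 function and the identity.
hyps: List[Callable[[int], int]] = [lambda x: 0, lambda x: x]
# Target is the identity: one mind change, then convergence.
print(limit_learner(hyps, lambda x: x, steps=5))   # [0, 1, 1, 1, 1]

The "limit" in learning in the limit is visible here: the learner never announces that it is done, but from some point on its conjecture stops changing and is correct.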
Consistency Conditions for Inductive Inference of Recursive Functions
"... Abstract. A consistent learner is required to correctly and completely reflect in its actual hypothesis all data received so far. Though this demand sounds quite plausible, it may lead to the unsolvability of the learning problem. Therefore, in the present paper several variations of consistent lear ..."
Abstract
-
Cited by 2 (2 self)
- Add to MetaCart
(Show Context)
A consistent learner is required to correctly and completely reflect in its current hypothesis all data received so far. Though this demand sounds quite plausible, it may lead to the unsolvability of the learning problem. Therefore, in the present paper several variations of consistent learning are introduced and studied. These variations allow a so-called δ-delay, relaxing the consistency demand to all but the last δ data. Additionally, we introduce the notion of coherent learning (again with δ-delay), requiring the learner to correctly reflect only the last datum (with δ-delay, the (n − δ)th datum) seen. Our results are threefold. First, it is shown that all models of coherent learning with δ-delay are exactly as powerful as their corresponding consistent learning models with δ-delay. Second, we provide characterizations of consistent learning with δ-delay in terms of complexity. Finally, we establish strict hierarchies for all consistent learning models with δ-delay in dependence on δ.
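Writing $h_n = M(f(0), \ldots, f(n))$ as before, the two δ-delay demands from the abstract read:

\[
\text{consistency with } \delta\text{-delay:} \quad \varphi_{h_n}(x) = f(x) \ \text{for all } x \le n - \delta ,
\]
\[
\text{coherence with } \delta\text{-delay:} \quad \varphi_{h_n}(n - \delta) = f(n - \delta) .
\]

Coherence constrains only one designated datum, yet the paper's first result shows that, for each fixed δ, coherent learning is exactly as powerful as the corresponding consistent model.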
On Uniform Learning of Classes of Recursive Functions, 2000
"... A classical learning problem in inductive inference consists of identifying each function of a given class of recursive functions from a finite number of its output values. Uniform learning is concerned with the design of single programs solving infinitely many classical learning problems. For that ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
A classical learning problem in inductive inference consists of identifying each function of a given class of recursive functions from a finite number of its output values. Uniform learning is concerned with the design of single programs solving infinitely many classical learning problems. For that purpose the program reads a description of an identification problem and is supposed to construct a technique for solving that particular problem. As can be proved, uniform solvability of collections of solvable identification problems is influenced more by the description of the problems than by the particular problems themselves. When a specific inference criterion is prescribed (for example, learning in the limit), a clever choice of descriptions allows uniform solvability of all solvable problems, whereas even the simplest classes of recursive functions are not uniformly learnable without restricting the set of possible descriptions. Furthermore, the influence of the hypothesis spaces on uniform learnability is analysed.
On the Amount of Nonconstructivity in the Inductive Inference of Recursive Functions, 2010
"... Nonconstructive proofs are a powerful mechanism in mathematics. Furthermore, nonconstructive computations by various types of machines and automata have been considered by e.g., Karp and Lipton [15] and Freivalds [10]. They allow to regard more complicated algorithms from the viewpoint of much more ..."
Abstract
- Add to MetaCart
(Show Context)
Nonconstructive proofs are a powerful mechanism in mathematics. Furthermore, nonconstructive computations by various types of machines and automata have been considered by, e.g., Karp and Lipton [15] and Freivalds [10]. They make it possible to regard more complicated algorithms from the viewpoint of much more primitive computational devices. The amount of nonconstructivity is a quantitative characterization of the distance between types of computational devices with respect to solving a specific problem. In the present paper, the amount of nonconstructivity in the learning of recursive functions is studied. Different learning types are compared with respect to the amount of nonconstructivity needed to learn the whole class of general recursive functions. Upper and lower bounds for the amount of nonconstructivity needed are proved.
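For orientation, the advice-taking computations of Karp and Lipton [15] alluded to above can be stated as follows (the paper's learning-theoretic analogue, which replaces the decision procedure by a learner, is not reproduced here):

\[
L \in \mathsf{P}/a(n) \iff \exists\, \text{poly-time } M \ \exists\, (\alpha_n)_{n \ge 0} \ \text{with } |\alpha_n| \le a(n) \ \forall x : \ x \in L \Leftrightarrow M(x, \alpha_{|x|}) \ \text{accepts} .
\]

The amount of nonconstructivity is then the growth rate of the advice length needed, and the paper's upper and lower bounds quantify this for learners facing the whole class of general recursive functions.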
Machine Learning of Higher Order Programs, 2007
"... A generator program for a computable function (by definition) generates an infinite sequence of programs all but finitely many of which compute that function. Machine learning of generator programs for computable functions is studied. To partially motivate these studies, it is shown that, in some ca ..."
Abstract
- Add to MetaCart
A generator program for a computable function (by definition) generates an infinite sequence of programs all but finitely many of which compute that function. Machine learning of generator programs for computable functions is studied. To partially motivate these studies, it is shown that, in some cases, interesting global properties of computable functions can be proved from suitable generator programs which cannot be proved from any ordinary programs for them. The power (for variants of various learning criteria from the literature) of learning generator programs is compared with the power of learning ordinary programs. The learning power in these cases is also compared to that of learning limiting programs, i.e., programs allowed finitely many mind changes about their correct outputs.
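The defining property admits a one-line formalization (notation ours; $\varphi$ an acceptable programming system):

\[
p \ \text{is a generator program for } f \iff p \ \text{outputs programs } q_0, q_1, q_2, \ldots \ \text{with } \varphi_{q_i} = f \ \text{for all but finitely many } i .
\]

Any ordinary program $e$ for $f$ yields a trivial generator that outputs $e$ forever, so for the standard criteria learning generator programs is at least as powerful as learning ordinary programs; the paper's comparisons make precise when the gain is strict.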
Identification Criteria in Uniform Inductive Inference
"... Uniform Inductive Inference is concerned with the existence and the learning behaviour of strategies identifying infinitely many classes of recursive functions. The success of such strategies depends on the hypothesis spaces they use, as well as on the chosen identification criteria resulting from a ..."
Abstract
- Add to MetaCart
Uniform inductive inference is concerned with the existence and the learning behaviour of strategies identifying infinitely many classes of recursive functions. The success of such strategies depends on the hypothesis spaces they use, as well as on the chosen identification criteria resulting from additional demands in the basic learning model. These identification criteria correspond to different hierarchies of learning power, depending on the choice of hypothesis spaces. In most cases finite classes of recursive functions are sufficient to expose an increase in the learning power given by the uniform learning models corresponding to a pair of identification criteria.