Results 1–10 of 32
Training Sequences
Abstract
Cited by 8 (1 self)
This paper initiates a study in which it is demonstrated that certain concepts (represented by functions) can be learned, but only if certain relevant subconcepts (also represented by functions) have been learned previously. In other words, the Soar project presents empirical evidence that learning how to learn is viable for computers, and this paper proves that doing so is the only possible way for computers to make certain inferences.
Robust Learning – Rich and Poor
 Journal of Computer and System Sciences
, 2000
Abstract
Cited by 7 (3 self)
A class C of recursive functions is called robustly learnable in the sense I (where I is any success criterion of learning) if not only C itself but also all transformed classes Θ(C), where Θ is any general recursive operator, are learnable in the sense I. It was already shown, see [Ful90, JSW98], that for I = Ex (learning in the limit) robust learning is rich: there are classes that are not contained in any recursively enumerable class of recursive functions and are nevertheless robustly learnable. For several criteria I, the present paper makes much more precise where we can hope for robustly learnable classes and where we cannot. This is achieved in two ways. First, for I = Ex, it is shown that only consistently learnable classes can be uniformly robustly learnable. Second, some other learning types I are classified as to whether or not they contain rich robustly learnable classes. Moreover, the first results on separating robust learning from unifor...
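The Ex criterion (learning in the limit) mentioned above can be illustrated by a toy identification-by-enumeration learner. The class, the enumeration, and all names below are illustrative assumptions, not from the paper:

```python
# Sketch (illustrative): Ex-learning ("learning in the limit") by
# identification by enumeration, for the toy class C = {f_n : n >= 0}
# with f_n(x) = n * x.

def hypothesis(n):
    return lambda x: n * x

def learner(data):
    """Given the graph points f(0), ..., f(k) as a list, conjecture the
    least index n whose hypothesis agrees with all the data."""
    n = 0
    while not all(hypothesis(n)(x) == y for x, y in enumerate(data)):
        n += 1
    return n

# Fed ever-longer initial segments of f = f_3, the conjectures converge:
# from the second segment on, the learner always outputs 3.
f = hypothesis(3)
guesses = [learner([f(x) for x in range(k + 1)]) for k in range(6)]
```

On any f in the class the learner's conjectures stabilize on a correct index after finitely many data points, which is exactly the Ex success criterion; robust learning additionally demands this for every operator image Θ(C) of the class.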
On the Classification of Computable Languages
, 1997
Abstract
Cited by 6 (4 self)
A one-sided classifier for a given class of languages converges to 1 on every language from the class and outputs 0 infinitely often on languages outside the class. A two-sided classifier, on the other hand, converges to 1 on languages from the class and converges to 0 on languages outside the class. The present paper investigates one-sided and two-sided classification for classes of computable languages. Theorems are presented that help assess the classifiability of natural classes. The relationships of classification to inductive learning theory and to structural complexity theory in terms of Turing degrees are studied. Furthermore, the special case of classification from only positive data is also investigated.
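The one-sided behaviour described above can be sketched for a toy case: the class of finite languages, classified from characteristic sequences. The function name and example languages are illustrative assumptions, not from the paper:

```python
# Sketch (illustrative): a one-sided classifier for the class of FINITE
# languages. It reads the characteristic sequence chi_L(0), chi_L(1), ...
# and after each bit guesses 1 ("finite so far") unless the newest bit
# revealed another element, in which case it guesses 0.

def one_sided_finite(chi_prefix):
    return [0 if bit else 1 for bit in chi_prefix]

# On a finite language the guesses converge to 1 (beyond the largest
# element every guess is 1); on an infinite language, 0 is output
# infinitely often -- the one-sided behaviour described above.
finite_guesses   = one_sided_finite([0, 1, 0, 1, 0, 0, 0])  # L = {1, 3}
infinite_guesses = one_sided_finite([1, 0] * 4)             # even numbers
```

No two-sided classifier for this class can exist by a standard diagonal argument, which is why the one-sided/two-sided distinction is non-trivial.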
On the uniform learnability of approximations to non-recursive functions
 Algorithmic Learning Theory: Tenth International Conference (ALT 1999), volume 1720 of Lecture Notes in Artificial Intelligence
, 1999
Abstract
Cited by 5 (3 self)
Abstract. Blum and Blum (1975) showed that a class B of suitable recursive approximations to the halting problem is reliably EX-learnable. These investigations are carried on by showing that B is neither in NUM nor robustly EX-learnable. Since the definition of the class B is quite natural and does not contain any self-referential coding, B serves as an example that the notion of robustness for learning is considerably more restrictive than intended. Moreover, variants of this problem obtained by approximating any given recursively enumerable set A instead of the halting problem K are studied. All corresponding function classes U(A) are still EX-inferable but may fail to be reliably EX-learnable, for example if A is non-high and hypersimple. Additionally, it is proved that U(A) is neither in NUM nor robustly EX-learnable provided that A is part of a recursively inseparable pair, A is simple but not hypersimple, or A is neither recursive nor high. These results provide more evidence that there is still some need to find an adequate notion of “naturally learnable function classes.”
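The recursive approximations to the halting problem discussed above are step-bounded: the undecidable question "does machine m halt on input x?" is replaced by the decidable "does it halt within t steps?". A minimal sketch under a toy machine model (all names hypothetical, not the paper's construction):

```python
# Sketch (illustrative): step-bounded approximation to halting. A toy
# "machine" exposes init(x) giving a start state and step(s) giving the
# next state, or None once the machine halts.

class CountDown:
    """Halts on input x after x + 1 steps."""
    def init(self, x): return x
    def step(self, s): return None if s == 0 else s - 1

class Loop:
    """Never halts on any input."""
    def init(self, x): return 0
    def step(self, s): return s

def runs_within(machine, x, t):
    """Decidable approximation to halting: True iff `machine`
    halts on input x within t steps."""
    state = machine.init(x)
    for _ in range(t):
        state = machine.step(state)
        if state is None:
            return True
    return False
```

As the step bound t grows, `runs_within` converges from below to the true halting behaviour, yet no single t decides halting for every machine; classes of such approximations are what the reliability and robustness results above are about.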
Refuting Learning Revisited
 Forschungsberichte Mathematische Logik 52/2001, Mathematisches Institut, Universität
, 2001
Abstract
Cited by 4 (2 self)
We consider, within the framework of inductive inference, the concept of refuting learning as introduced by Mukouchi and Arikawa, where the learner is not only required to learn all concepts in a given class but also has to explicitly refute concepts outside the class.
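A refuting learner in the Mukouchi–Arikawa style can be sketched for a toy class, the constant functions: the learner hypothesizes while the data is consistent with some class member, and explicitly refutes once it cannot be. The function name and the class choice are illustrative assumptions:

```python
# Sketch (illustrative): a refuting learner for the toy class of
# CONSTANT functions. It outputs a hypothesis while f(0), ..., f(k)
# could still come from a constant function, and explicitly refutes
# as soon as two distinct values have appeared.

def refuting_learner(data):
    if len(set(data)) <= 1:
        return ("hypothesis", data[0] if data else 0)
    return ("refute", None)
```

On any constant function the learner converges to the correct hypothesis; on any function outside the class two distinct values eventually appear, after which every output is a refutation, as the refuting-learning criterion requires.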
Probabilistic and Team PFIN-type Learning: General Properties
Abstract
Cited by 3 (3 self)
We consider the probability hierarchy for Popperian FINite learning and study the general properties of this hierarchy. We prove that the probability hierarchy is decidable, i.e., there exists an algorithm that receives p1 and p2 and answers whether PFIN-type learning with probability of success p1 is equivalent to PFIN-type learning with probability of success p2. To prove our result, we analyze the topological structure of the probability hierarchy. We prove that it is well-ordered in descending ordering and order-equivalent to the ordinal ε0. This shows that the structure of the hierarchy is very complicated. Using similar methods, we also prove that, for PFIN-type learning, team learning and probabilistic learning are of the same power.
Learning Classes of Approximations to Non-Recursive Functions
 Theoret. Comput. Sci
Abstract
Cited by 3 (3 self)
Blum and Blum (1975) showed that a class B of suitable recursive approximations to the halting problem K is reliably EX-learnable but left it open whether or not B is in NUM. By showing that B is not in NUM we resolve this old problem. Moreover, variants of this problem obtained by approximating any given recursively enumerable set A instead of the halting problem K are studied. All corresponding function classes U(A) are still EX-inferable but may fail to be reliably EX-learnable, for example if A is non-high and hypersimple. Blum and Blum (1975) considered only approximations to K defined by monotone complexity functions. We prove this condition to be necessary for making learnability independent of the underlying complexity measure. The class ~B of all recursive approximations to K generated by all total complexity functions is shown to be not even behaviorally correctly learnable for a class of natural complexity measures. On the other hand, there are complexity measures such that ~B is EX-learnable. A similar result is obtained for all classes ~U(A). For natural complexity measures, B is shown to be not robustly learnable, but again there are complexity measures such that B and, more generally, every class U(A) is robustly EX-learnable. This result extends the criticism of Jain, Smith and Wiehagen (1998), since the classes defined by artificial complexity measures turn out to be robustly learnable while those defined by natural complexity measures are not.
Costs of general purpose learning
 Theoretical Computer Science
, 2007
Abstract
Cited by 2 (1 self)
Leo Harrington surprisingly constructed a machine which can learn any computable function f according to the following criterion (called Bc*-identification). His machine, on the successive graph points of f, outputs a corresponding infinite sequence of programs p0, p1, p2, ..., and, for some i, the programs pi, pi+1, pi+2, ... each compute a variant of f which differs from f at only finitely many argument places. A machine with this property is called general purpose. The sequence pi, pi+1, pi+2, ... is called a final sequence. For Harrington’s general purpose machine, for distinct m and n, the finitely many argument places where pi+m fails to compute f can be very different from the finitely many argument places where pi+n fails to compute f. One would hope, though, that if Harrington’s machine, or an improvement thereof, inferred the program pi+m based on the data points f(0), f(1), ..., f(k), then pi+m would make very few mistakes computing f at the “near future” arguments k + 1, k + 2, ..., k + ℓ, where ℓ is reasonably large. Ideally, pi+m’s finitely many mistakes or anomalies would (mostly) occur at arguments x ≫ k, i.e., ideally, its anomalies would be well placed beyond