Results 1 – 7 of 7
On the intrinsic complexity of learning recursive functions
In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, 1999
Abstract

Cited by 5 (1 self)
The intrinsic complexity of learning compares the difficulty of learning classes of objects by using some reducibility notion. For several types of learning recursive functions, both natural complete classes are exhibited and necessary and sufficient conditions for completeness are derived. Informally, a class is complete iff its topological structure is highly complex while its algorithmic structure is easy. Some self-describing classes turn out to be complete. Furthermore, the structure of the intrinsic complexity is shown to be much richer than the structure of the mind change complexity, though in general, intrinsic complexity and mind change complexity can behave “orthogonally”.
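The underlying model of learning in the limit can be illustrated with a toy sketch. The class chosen here (functions that are zero almost everywhere) and the learner are illustrative assumptions, not taken from the paper:

```python
import itertools

def ex_learn_ae_zero(f):
    """Toy learner in the limit for the class of recursive functions
    that are zero almost everywhere (an illustrative example, not the
    paper's construction).

    After reading f(0), ..., f(n) it conjectures the finite patch of
    nonzero values seen so far, standing in for a program index.  If f
    is zero almost everywhere, the conjectures converge: from some
    point on, the same hypothesis is emitted forever.  The number of
    times the hypothesis changes is the learner's mind change count.
    """
    patch = {}
    for n in itertools.count():
        if f(n) != 0:
            patch[n] = f(n)
        yield dict(patch)

# Example: f is 7 at argument 3 and zero elsewhere; the hypotheses
# stabilise at {3: 7} from step 3 on.
hyps = list(itertools.islice(ex_learn_ae_zero(lambda n: 7 if n == 3 else 0), 6))
```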
On One-Sided Versus Two-Sided Classification
Forschungsberichte Mathematische Logik 25/1996, Mathematisches Institut, Universität, 1996
Abstract

Cited by 5 (3 self)
One-sided classifiers are computable devices which read the characteristic function of a set and output a sequence of guesses which converges to 1 iff the input set belongs to the given class. Such a classifier is two-sided if its output sequence in addition converges to 0 on sets not belonging to the class. The present work obtains the results below for one-sided classes (= Σ⁰₂ classes) with respect to four areas: Turing complexity, 1-reductions, index sets and measure. There are one-sided classes which are not two-sided. This can have two reasons: (1) the class merely has high Turing complexity; then there are oracles which allow one to construct non-computable two-sided classifiers. (2) The class is difficult because of topological constraints; then there are no two-sided classifiers, not even non-recursive ones. For case (1), several results are obtained to localize the Turing complexity of certain types of one-sided sets. The concepts of 1-reduction, 1-completene...
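The one-sided/two-sided distinction can be made concrete with a standard toy example; the class of finite sets used here is an illustrative assumption, not the paper's own construction:

```python
import itertools

def one_sided_finite(chi):
    """Toy one-sided classifier for the class of finite sets.

    Reads the characteristic function chi bit by bit and emits one
    guess per bit: 0 whenever a new element shows up, 1 otherwise.
    On a finite set only finitely many guesses are 0, so the sequence
    converges to 1.  On an infinite, co-infinite set (e.g. the even
    numbers) it oscillates forever, so it converges to neither 0 nor 1:
    this classifier is one-sided but not two-sided.
    """
    for n in itertools.count():
        yield 0 if chi(n) == 1 else 1

# The finite set {1, 3}: guesses are 1,0,1,0 and then 1 forever.
finite_guesses = list(itertools.islice(
    one_sided_finite(lambda n: 1 if n in (1, 3) else 0), 8))

# The even numbers: guesses alternate 0,1,0,1,... without converging.
even_guesses = list(itertools.islice(
    one_sided_finite(lambda n: 1 if n % 2 == 0 else 0), 6))
```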
On Learning of Functions Refutably
2007
Abstract

Cited by 2 (0 self)
Learning of recursive functions refutably means, informally, that for every recursive function the learning machine has either to learn this function or to refute it, that is, to signal that it is not able to learn it. Three ways of making the notion of refuting precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where already the most stringent of them turns out to be of remarkable topological and algorithmic richness. Furthermore, all these types are closed under union, though in different strengths. Also, these types are shown to differ with respect to their intrinsic complexity; two of them do not contain function classes that are “most difficult” to learn, while the third one does. Moreover, we present several characterizations for these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general. For learning with anomalies refutably, we show that several results from standard learning without refutation also hold refutably. From this we derive some hierarchies for refutable learning. Finally, we prove that in general one cannot trade stricter refutability constraints for more liberal learning criteria.
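The "learn or refute" behaviour can be sketched with a toy learner; the class of constant functions and the refutation symbol used here are illustrative assumptions, not the paper's definitions:

```python
import itertools

def learn_constants_refutably(f):
    """Toy refutable learner for the class of constant functions.

    Emits one hypothesis per datum.  If f is constant, the output
    converges to ('const', f(0)).  As soon as two different values are
    seen, the learner switches to the refutation symbol 'refute' and
    keeps refuting forever: on every input it either learns the
    function or explicitly signals that it cannot.
    """
    c = f(0)
    refuted = False
    for n in itertools.count():
        if f(n) != c:
            refuted = True
        yield 'refute' if refuted else ('const', c)

# A constant function is learned; the identity function is refuted
# after its second value.
ok = list(itertools.islice(learn_constants_refutably(lambda n: 5), 4))
bad = list(itertools.islice(learn_constants_refutably(lambda n: n), 4))
```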
Data
Abstract
One focus of inductive inference is to infer a program for a function f from observations or queries about f. We propose a new line of research which examines the question of inferring the answers to queries. For a given class of computable functions, we consider the learning (in the limit) of properties of these functions that can be captured by queries formulated in a logical language L. We study the inference types that arise in this context. Of particular interest is a comparison between the learning of properties and the learning of programs. Our results suggest that these two types of learning are incomparable. In addition, our techniques can be used to prove a general lemma about query inference [19]. We show that I ⊂ J ⇒ QI(L) ⊂ QJ(L) for many standard inference types I, J and many query languages L. Hence any separation that holds between these inference types also holds between the corresponding query inference types. One interesting consequence is that [24, 49]QEX_0([Succ, <]_2) − [2, 4]QEX_0([Succ, <]_2) ≠ ∅.
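Inferring the answer to a query, rather than a program, can be sketched for a single existential query; the query "does f ever take the value 0?" and the learner below are illustrative assumptions, not an instance of the paper's query languages:

```python
import itertools

def limit_answer_exists_zero(f):
    """Toy sketch: learning in the limit the answer to the query
    'does f ever take the value 0?'.

    After reading f(0), ..., f(n) the learner guesses 'yes' or 'no'.
    The guess sequence converges to the true answer: it stays 'no'
    while no zero has appeared and switches permanently to 'yes' once
    one is seen, so a single mind change suffices for this existential
    query.
    """
    seen_zero = False
    for n in itertools.count():
        seen_zero = seen_zero or f(n) == 0
        yield 'yes' if seen_zero else 'no'

# f(n) = max(0, 3 - n) first hits 0 at n = 3, so the guesses switch
# from 'no' to 'yes' there; a function that never hits 0 keeps 'no'.
guesses = list(itertools.islice(limit_answer_exists_zero(lambda n: max(0, 3 - n)), 6))
```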
Refutable Inductive Inference of Recursive Functions
2001
Abstract
Learning of recursive functions refutably means, informally, that for every recursive function the learning machine has either to learn this function or to refute it, that is, to signal that it is not able to learn it. Three ways of making the notion of refuting precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where already the most stringent of them turns out to be of remarkable topological and algorithmic richness. Furthermore, all these types are closed under union, though in different strengths. Also, these types are shown to differ with respect to their intrinsic complexity; two of them do not contain function classes that are "most difficult" to learn, while the third one does. Moreover, we present several characterizations for these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general. For learning with anomalies refutably, we show that several results from standard learning without refutation also hold refutably. From this we derive some hierarchies for refutable learning. Finally, we prove that in general one cannot trade stricter refutability constraints for more liberal learning criteria.
Learning Recursive Functions Refutably
Abstract
Learning of recursive functions refutably means that for every recursive function, the learning machine has either to learn this function or to refute it, i.e., to signal that it is not able to learn it. Three ways of making the notion of refuting precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where already the most stringent of them turns out to be of remarkable topological and algorithmic richness. All these types are closed under union, though in different strengths. Also, these types are shown to differ with respect to their intrinsic complexity; two of them do not contain function classes that are “most difficult” to learn, while the third one does. Moreover, we present characterizations for these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general. For learning with anomalies refutably, we show that several results from standard learning without refutation also hold refutably. Then we derive hierarchies for refutable learning. Finally, we show that stricter refutability constraints cannot be traded for more liberal learning criteria.