Results 1–9 of 9
Turing degrees and the Ershov hierarchy
 in Proceedings of the Tenth Asian Logic Conference, Kobe, Japan, 16 September 2008, World Scientific
Abstract

Cited by 4 (0 self)
Abstract. An n-r.e. set can be defined as the symmetric difference of n recursively enumerable sets. The classes of these sets form a natural hierarchy which became a well-studied topic in recursion theory. In a series of groundbreaking papers, Ershov generalized this hierarchy to transfinite levels based on Kleene’s notations of ordinals, and this work led to a fruitful study of these sets and their many-one and Turing degrees. The Ershov hierarchy is a natural measure of the complexity of the sets below the halting problem. In this paper, we survey the early work by Ershov and others on this hierarchy and present the most fundamental results. We also provide some pointers to concurrent work in the field.
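The opening definition has a concrete finite analogue worth seeing once: membership in a symmetric difference of n enumerable sets admits a stage-wise approximation that starts at “no” and changes its mind at most n times. A minimal Python sketch, with small hand-picked finite enumerations standing in for r.e. sets (the `stages` data below is hypothetical, chosen only to illustrate the counting):

```python
from functools import reduce

def symmetric_difference(sets):
    """Symmetric difference of a list of sets: x is in the result
    iff x belongs to an odd number of the sets."""
    return reduce(lambda a, b: a ^ b, sets, set())

# Hypothetical stage-wise enumerations of n = 3 r.e. sets:
# stages[s][i] is the finite part of the i-th set enumerated by stage s.
# Each column only ever grows, as an r.e. enumeration must.
stages = [
    [set(),     set(),   set()],
    [{1, 2},    set(),   set()],
    [{1, 2},    {2},     set()],
    [{1, 2, 3}, {2, 3},  {3}],
]

def approximation(x):
    """Stage-wise guesses about membership of x in the symmetric difference."""
    return [x in symmetric_difference(parts) for parts in stages]

def mind_changes(guesses):
    """Count how often consecutive guesses differ (the initial guess is 'no')."""
    changes, prev = 0, False
    for g in guesses:
        if g != prev:
            changes += 1
        prev = g
    return changes

# Each element changes its mind at most n = 3 times.
for x in (1, 2, 3):
    assert mind_changes(approximation(x)) <= 3
```

Here element 2 enters at stage 1 and leaves at stage 2 (two mind changes), which is exactly the behaviour a single r.e. set cannot exhibit.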
Parsimony hierarchies for inductive inference
 Journal of Symbolic Logic
Abstract

Cited by 3 (1 self)
Freivalds defined an acceptable programming system independent criterion for learning programs for functions in which the final programs were required to be both correct and “nearly” minimal size, i.e., within a computable function of being purely minimal size. Kinber showed that this parsimony requirement on final programs limits learning power. However, in scientific inference, parsimony is considered highly desirable. A lim-computable function is (by definition) one calculable by a total procedure allowed to change its mind finitely many times about its output. Investigated is the possibility of assuaging somewhat the limitation on learning power resulting from requiring parsimonious final programs by use of criteria which require the final, correct programs to be “not-so-nearly” minimal size, e.g., to be within a lim-computable function of actual minimal size. It is shown that some parsimony in the final program is thereby retained, yet learning power strictly increases. Considered, then, are lim-computable functions as above but for which notations for constructive ordinals are used to bound the number of mind changes allowed regarding the output. This is a variant of an idea introduced by Freivalds and Smith. For this ordinal notation complexity bounded version of lim-computability, the power of …
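The notion of a lim-computable function can be illustrated directly: approximate “does this program halt?” by the stage-s guess “halts within s steps”. Each guess changes at most once, from no to yes, so the limit exists and the number of mind changes is bounded by 1. A sketch with toy “programs” modelled as Python generators (an assumption of this illustration, not the paper’s formalism):

```python
def halts_after(k):
    """A toy 'program' that runs for exactly k steps, then halts."""
    def prog():
        for _ in range(k):
            yield
    return prog

def runs_forever():
    """A toy 'program' that never halts."""
    while True:
        yield

def halts_within(prog, s):
    """Stage-s guess: 1 if prog halts within s steps, else 0."""
    g = prog()
    for _ in range(s):
        try:
            next(g)
        except StopIteration:
            return 1
    return 0

def mind_changes(guesses):
    """Count how often consecutive guesses differ."""
    return sum(1 for a, b in zip(guesses, guesses[1:]) if a != b)

# The guesses are monotone (0 -> 1 at most once), so the limit exists
# and the number of mind changes is at most 1.
gs = [halts_within(halts_after(3), s) for s in range(10)]
assert mind_changes(gs) <= 1
```

Bounding mind changes by a notation for a constructive ordinal, as in the abstract, generalizes the constant bound used here.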
Counting Extensional Differences in BC-Learning
 PROCEEDINGS OF THE 5TH INTERNATIONAL COLLOQUIUM ON GRAMMATICAL INFERENCE (ICGI 2000), SPRINGER LECTURE NOTES IN A. I. 1891
, 2000
Abstract

Cited by 1 (0 self)
Let BC be the model of behaviourally correct function learning as introduced by Bārzdins [4] and Case and Smith [8]. We introduce a mind change hierarchy for BC, counting the number of extensional differences in the hypotheses of a learner. We compare the resulting models BCn to models from the literature and discuss confidence, team learning, and finitely defective hypotheses. Among other things, we prove that there is a trade-off between the number of semantic mind changes and the number of anomalies in the hypotheses. We also discuss consequences for language learning. In particular we show that, in contrast to the case of function learning, the family of classes that are confidently BC-learnable from text is not closed under finite unions. Keywords. Models of grammar induction, inductive inference, behaviourally correct learning.
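Counting *extensional* differences means that syntactic changes of hypothesis which compute the same function are not charged as mind changes. Extensional equality of programs is undecidable in general; the following Python sketch therefore compares hypotheses on a fixed finite domain, purely as an illustration of what is being counted (the helper name and the sample hypotheses are ours):

```python
def bc_extensional_mind_changes(hypotheses, domain):
    """Count stages where the new hypothesis differs *extensionally*
    (as a function on `domain`) from the previous one; syntactically
    different hypotheses computing the same function are not counted."""
    changes, prev = 0, None
    for h in hypotheses:
        graph = tuple(h(x) for x in domain)
        if prev is not None and graph != prev:
            changes += 1
        prev = graph
    return changes

# Three syntactically different hypotheses; the first two agree
# extensionally, so only one semantic mind change is counted.
hyps = [lambda x: x + x, lambda x: 2 * x, lambda x: x * x]
assert bc_extensional_mind_changes(hyps, range(5)) == 1
```

An ordinary (syntactic) mind-change count would charge two changes to this sequence; the extensional count of the BCn models charges one.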
Counting Extensional Differences in BC-Learning
"... University of Heidelberg; Sebastiaan A. Terwijn, Vrije Universiteit Amsterdam ..."
Abstract
Alice and Bob want to know if two strings of length n are almost equal. That is, do they differ on at most a bits? Let 0 ≤ a ≤ n − 1. We show that any deterministic protocol, as well as any error-free quantum protocol (C∗ version), for this problem requires at least n − 2 bits of communication. We show the same bounds for the problem of determining if two strings differ in exactly a bits. We also prove a lower bound of n/2 − 1 for error-free Q∗ quantum protocols. Our results are obtained by lower-bounding the ranks of the appropriate matrices.
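The rank method mentioned at the end can be made concrete on tiny instances: for a deterministic protocol, the communication needed is at least log2 of the real rank of the communication matrix M[x][y] = 1 iff x and y differ in at most a positions. A small exact-arithmetic sketch (illustrative only; the helper names are ours, and the paper’s bounds concern general n):

```python
from fractions import Fraction
from itertools import product

def comm_matrix(n, a):
    """M[x][y] = 1 iff the n-bit strings x and y differ in at most a bits."""
    strings = list(product((0, 1), repeat=n))
    return [[Fraction(int(sum(xi != yi for xi, yi in zip(x, y)) <= a))
             for y in strings] for x in strings]

def rank(M):
    """Rank over the rationals by exact Gaussian elimination."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # Find a pivot for column c among the remaining rows.
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        head = M[r][c]
        M[r] = [v / head for v in M[r]]
        # Eliminate column c from every other row.
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [vi - f * vr for vi, vr in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return r

# Sanity check: a = 0 gives the identity matrix, which has full rank.
assert rank(comm_matrix(2, 0)) == 4
```

Exact `Fraction` arithmetic matters here: floating-point Gaussian elimination can misjudge the rank of a 0/1 matrix, and the lower-bound argument needs the rank over the reals, not an approximation.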
Unifying Logic, Topology and Learning in Parametric Logic
Abstract
Many connections have been established between learning and logic, or learning and topology, or logic and topology. Still, the connections are not at the heart of these fields. Each of them is fairly independent of the others when attention is restricted to basic notions and main results. We show that connections can actually be made at a fundamental level, and result in a logic with parameters that needs topological notions for its early developments, and notions from learning theory for interpretation and applicability. One of the key properties...
Unifying Logic, Topology and Learning in Parametric Logic
Abstract
1 Introduction. There is an immediate similarity between the compactness theorem of first-order logic and finite learning. Indeed, let …