Results 1–10 of 10
Lowness Properties and Randomness
 ADVANCES IN MATHEMATICS
Cited by 101 (25 self)
The set A is low for Martin-Löf random if each random set is already random relative to A. A is K-trivial if the prefix complexity K of each initial segment of A is minimal, namely K(n) + O(1) for the segment of length n. We show that these classes coincide. This implies answers to questions of Ambos-Spies and Kučera [2], showing that each low for Martin-Löf random set is ∆⁰₂. Our class induces a natural intermediate Σ⁰₃ ideal in the r.e. Turing degrees (which generates the whole class under downward closure). Answering ...
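The two lowness notions in this abstract can be stated formally. A sketch of the standard definitions, with K denoting prefix-free Kolmogorov complexity and A↾n the length-n initial segment of A:

```latex
% A is low for Martin-Löf randomness: relativizing to A
% does not shrink the class of random sets.
\forall Z \; \bigl( Z \text{ is ML-random} \implies Z \text{ is ML-random relative to } A \bigr)

% A is K-trivial: its initial segments are no more complex
% than their lengths, up to a constant.
\exists c \; \forall n \;\; K(A \upharpoonright n) \le K(n) + c
```

The theorem quoted above is that these two classes of sets are exactly the same.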
On the Structure of Degrees of Inferability
 Journal of Computer and System Sciences
, 1993
Cited by 31 (18 self)
Degrees of inferability have been introduced to measure the learning power of inductive inference machines which have access to an oracle. The classical concept of degrees of unsolvability measures the computing power of oracles. In this paper we determine the relationship between both notions.

1 Introduction
We consider learning of classes of recursive functions within the framework of inductive inference [21]. A recent theme is the study of inductive inference machines with oracles ([8, 10, 11, 17, 24] and tangentially [12]; cf. [10] for a comprehensive introduction and a collection of all previous results). The basic question is how the information content of the oracle (technically: its Turing degree) relates to its learning power (technically: its inference degree, depending on the underlying inference criterion). In this paper a definitive answer is obtained for the case of recursively enumerable oracles and the case when only finitely many queries to the oracle are allowed.
Lowness for Kurtz randomness
 J. Symbolic Logic
Cited by 17 (3 self)
Abstract. We prove that degrees that are low for Kurtz randomness cannot be diagonally nonrecursive. Together with the work of Stephan and Yu [16], this proves that they coincide with the hyperimmune-free non-DNR degrees, which are also exactly the degrees that are low for weak 1-genericity. We also consider Low(M, Kurtz), the class of degrees a such that every element of M is a-Kurtz random. These are characterised when M is the class of Martin-Löf random, computably random, or Schnorr random reals. We show that Low(ML, Kurtz) coincides with the non-DNR degrees, while both Low(CR, Kurtz) and Low(Schnorr, Kurtz) are exactly the non-high, non-DNR degrees.
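For reference, the diagonal nonrecursiveness notion used in this abstract can be sketched as follows, writing φ_e for the e-th partial computable function:

```latex
% f : \mathbb{N} \to \mathbb{N} is diagonally nonrecursive (DNR)
% if it avoids the diagonal of the universal enumeration:
\forall e \;\; \bigl( \varphi_e(e)\downarrow \;\implies\; f(e) \neq \varphi_e(e) \bigr)
```

A Turing degree is DNR if it computes such a function f; the "non-DNR degrees" above are those that compute no such function.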
Learning via Queries and Oracles
 In Proc. 8th Annu. Conf. on Comput. Learning Theory
, 1996
Cited by 11 (2 self)
Inductive inference considers two types of queries: queries to a teacher about the function to be learned and queries to a nonrecursive oracle. This paper combines these two types: it considers three basic models of queries to a teacher, namely QEX[Succ], QEX[<] and QEX[+], together with membership queries to some oracle. The results for these three models of query-inference are very similar: if an oracle is already omniscient for query-inference, then it is already omniscient for EX. There is an oracle of trivial EX-degree which allows nontrivial query-inference. Furthermore, queries to a teacher cannot overcome differences between oracles, and the query-inference degrees are a proper refinement of the EX-degrees.

1 Introduction
One famous example of learning via queries to a teacher is the game Mastermind. The teacher first selects the code, a quadruple of colours, that should be learned. Then the learner tries to figure out the code. In each round, the learner makes one guess ...
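The Mastermind example can be made concrete. Below is a minimal sketch of a consistent-guessing learner that queries a teacher oracle; the 6-colour, 4-position variant and the candidate-filtering strategy are illustrative assumptions, not a construction from the paper:

```python
from collections import Counter
from itertools import product

COLOURS = range(6)   # assumed: 6 colours, codes are quadruples
CODE_LEN = 4

def feedback(guess, secret):
    """Teacher's answer: (black, white) pegs for a guess."""
    black = sum(g == s for g, s in zip(guess, secret))
    common = sum((Counter(guess) & Counter(secret)).values())
    return black, common - black

def learn(teacher):
    """Repeatedly query the teacher until the code is identified."""
    candidates = list(product(COLOURS, repeat=CODE_LEN))
    queries = 0
    while True:
        guess = candidates[0]            # any hypothesis consistent so far
        answer = teacher(guess)
        queries += 1
        if answer == (CODE_LEN, 0):      # all pegs black: code found
            return guess, queries
        # keep only codes consistent with every answer received so far
        candidates = [c for c in candidates if feedback(guess, c) == answer]

secret = (2, 0, 5, 3)
code, n = learn(lambda g: feedback(g, secret))
print(code, n)   # recovers the secret after finitely many queries
```

Since the true code is always consistent with every answer, it survives each filtering step, so the learner terminates with the correct code after finitely many queries.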
ON STRONGLY JUMP TRACEABLE REALS
Cited by 5 (0 self)
Abstract. In this paper we show that there is no minimal bound for jump traceability. In particular, there is no single order function such that strong jump traceability is equivalent to jump traceability for that order. The uniformity of the proof method allows us to adapt the technique to showing that the index set of the strongly jump traceable c.e. sets is Π⁰₄-complete.

§1. Introduction.
One of the fundamental concerns of computability theory is understanding the relative difficulty of computational problems as measured by Turing reducibility (≤T). The equivalence classes of the preordering ≤T are called Turing degrees, and it has long been recognized that the fundamental operator on the structure of the Turing degrees is the jump operator. For a set A, the ...
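The notions being compared can be sketched formally. Using the standard definitions, with J^A the jump of A (i.e. J^A(x) ≃ Φ_x^A(x)) and an order function meaning an unbounded nondecreasing computable function:

```latex
% A is jump traceable with bound h if there is a uniformly c.e.
% sequence (T_x) of finite sets such that
\forall x \;\; |T_x| \le h(x)
  \quad\text{and}\quad
  J^A(x)\downarrow \;\implies\; J^A(x) \in T_x

% A is strongly jump traceable if it is jump traceable
% with bound h for every order function h.
```

The result quoted above says that no single order function h makes jump traceability with bound h coincide with strong jump traceability.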
Robust Learning with Infinite Additional Information (Extended Abstract)
 EUROCOLT'97, LNCS 1208
, 1997
Cited by 4 (3 self)
The present work investigates Gold-style algorithmic learning from input-output examples whereby the learner has access to oracles as additional information. Furthermore, this access has to be robust; that is, a single learning algorithm has to succeed with every oracle which meets a given specification. The first main result considers oracles of the same Turing degree: robust learning with any oracle from a given degree does not achieve more than learning without any additional information. The further work considers learning from function oracles which describe the whole class of ...
Extensional Set Learning
 Proceedings of The Twelfth Annual Conference on Computational Learning Theory (COLT '99)
, 2000
Cited by 4 (2 self)
We investigate the model recBC of learning of r.e. sets, where changes in hypotheses only count when there is an extensional difference. We study the learnability of collections that are uniformly r.e. We prove that, in contrast with the case of uniformly recursive collections, identifiability does not imply recursive BC-identifiability. This answers a question of D. de Jongh. In contrast to the model of recursive identifiability, we prove that the BC-model separates the notions of finite thickness and finite elasticity.

1 Introduction
In this paper we consider a model of learning where two hypotheses about the data under consideration are considered equal when they denote the same object, i.e. when they are extensionally the same. This model was first defined for identification of functions in Feldman [6] and Barzdin [3]. The first reference for this model in the context of set learning (learning from text) seems to be Osherson and Weinstein [14]. The model, and similar ones, ha...
Beyond strong jump traceability
Cited by 4 (1 self)
Abstract. Strong jump traceability has been studied by various authors. In this paper we study a variant of strong jump traceability by looking at a partial relativization of traceability. We discover a new subclass H of the c.e. K-trivials with some interesting properties. These sets are computationally very weak, yet the class contains a cuppable member. Surprisingly, they cannot be constructed using cost functions, and this is the first known example of a subclass of the K-trivials which does not contain any promptly simple member. Furthermore, there is a single c.e. set which caps every member of H, demonstrating that they are in fact very far from being promptly simple.
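A sketch of the cost-function framework alluded to here, following the usual formulation (details vary by author):

```latex
% A cost function is a computable c : \mathbb{N}^2 \to \mathbb{Q}^{\ge 0}.
% A computable approximation (A_s) of a \Delta^0_2 set A obeys c if the
% total cost of all changes in the approximation is finite:
\sum_{s > 0} c(x_s, s) < \infty,
  \quad\text{where } x_s \text{ is least with } A_{s-1}(x_s) \neq A_s(x_s)
```

The standard construction of K-trivial sets proceeds by building a c.e. set whose approximation obeys a suitable cost function; the surprise reported above is that the subclass H cannot be obtained this way.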
Probabilistic Learning of Indexed Families under Monotonicity Constraints: Hierarchy Results and Complexity Aspects
We are concerned with probabilistic identification of indexed families of uniformly recursive languages from positive data under monotonicity constraints. Thereby, we consider conservative, strong-monotonic and monotonic probabilistic learning of indexed families with respect to class-comprising, class-preserving and proper hypothesis spaces, and investigate the probabilistic hierarchies in these learning models. In the setting of learning indexed families, probabilistic learning under monotonicity constraints is more powerful than deterministic learning under monotonicity constraints, even if the probability is close to 1, provided the learning machines are restricted to proper or class-preserving hypothesis spaces. In the class-comprising case, each of the investigated probabilistic hierarchies has a threshold. In particular, we can show for class-comprising conservative learning, as well as for learning without additional constraints, that probabilistic identification and team identification are equivalent. This yields discrete probabilistic hierarchies in these cases.

In the second part of our work, we investigate the relation between probabilistic learn...