Results 1–10 of 19
Biometric identification
Communications of the ACM, 2000
Abstract

Cited by 60 (4 self)
Identification of grammars (r.e. indices) for recursively enumerable languages from positive data by algorithmic devices is a well-studied problem in learning theory. The present paper considers identification of r.e. languages by machines that have access to membership oracles for noncomputable sets. It is shown that for any set A there exists another set B such that the collection of r.e. languages that can be identified by machines with access to a membership oracle for B is strictly larger than the collection of r.e. languages that can be identified by machines with access to a membership oracle for A. In other words, there is no maximal inference degree for language identification.
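The oracle results above build on Gold-style identification in the limit from positive data. As a toy illustration of that background notion (not the paper's oracle construction), the following sketch uses the hypothetical class of languages L_n = {0, 1, ..., n}: a learner that conjectures the least consistent index converges on any text for L_n.

```python
# Identification in the limit (Gold style), sketched for the toy
# class L_n = {0, 1, ..., n}. The class and learner are illustrative
# assumptions, not taken from the paper.

def learner(data_prefix):
    # conjecture the least n such that L_n contains every datum seen
    return max(data_prefix) if data_prefix else 0

text = [2, 0, 1, 2, 1, 0, 2]  # a text (positive data) for L_2
hypotheses = [learner(text[:i + 1]) for i in range(len(text))]
# the sequence of conjectures stabilises once a maximal element appears
```

After the first occurrence of 2 the conjecture never changes again, which is exactly the convergence behaviour that identification in the limit requires.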
Learning via Queries and Oracles
In Proc. 8th Annu. Conf. on Comput. Learning Theory, 1996
Abstract

Cited by 11 (2 self)
Inductive inference considers two types of queries: queries to a teacher about the function to be learned and queries to a nonrecursive oracle. This paper combines these two types: it considers three basic models of queries to a teacher, namely QEX[Succ], QEX[!] and QEX[+], together with membership queries to some oracle. The results for these three models of query-inference are very similar: if an oracle is already omniscient for query-inference, then it is already omniscient for EX. There is an oracle of trivial EX-degree which allows nontrivial query-inference. Furthermore, queries to a teacher cannot overcome differences between oracles, and the query-inference degrees are a proper refinement of the EX-degrees.

1 Introduction

One famous example of learning via queries to a teacher is the game Mastermind. The teacher first selects the code (a quadruple of colours) that should be learned. Then the learner tries to figure out the code. In each round, the learner makes one gue...
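The Mastermind example can be made concrete. Below is a minimal sketch of a consistency-based query learner: it keeps every code consistent with all answers received so far and queries one of them each round. The colour alphabet, code length, and elimination strategy are illustrative assumptions, not the paper's models QEX[Succ], QEX[!], QEX[+].

```python
from itertools import product

COLORS = "RGBYOP"  # hypothetical six-colour alphabet
CODE_LEN = 4

def feedback(secret, guess):
    # black pegs: correct colour in the correct position
    black = sum(s == g for s, g in zip(secret, guess))
    # total colour matches ignoring position; whites are the surplus
    common = sum(min(secret.count(c), guess.count(c)) for c in COLORS)
    return black, common - black

def learn(ask):
    # keep all codes consistent with every answer so far,
    # and query the first remaining candidate each round
    candidates = [''.join(p) for p in product(COLORS, repeat=CODE_LEN)]
    rounds = 0
    while True:
        guess = candidates[0]
        rounds += 1
        answer = ask(guess)
        if answer == (CODE_LEN, 0):
            return guess, rounds
        candidates = [c for c in candidates if feedback(c, guess) == answer]

secret = "GBRY"
code, rounds = learn(lambda g: feedback(secret, g))
```

The learner always terminates: the secret is consistent with its own answers and so survives every elimination step, while any wrong guess fails to match its own feedback and is removed.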
Robust Learning Aided by Context
In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, 1998
Abstract

Cited by 10 (6 self)
Empirical studies of multitask learning provide some evidence that the performance of a learning system on its intended targets improves when related tasks, also called contexts, are presented to the learning system as additional input. Angluin, Gasarch, and Smith, as well as Kinber, Smith, Velauthapillai, and Wiehagen, have provided mathematical justification for this phenomenon in the inductive inference framework. However, their proofs rely heavily on self-referential coding tricks; that is, they directly code the solution of the learning problem into the context. Fulk has shown that for the Ex and Bc-anomaly hierarchies, such results, which rely on self-referential coding tricks, may not hold robustly. In this work we analyze robust versions of learning aided by context and show that, in contrast to Fulk's result above, the robust versions of ...

(This work was carried out while J. Case, S. Jain, M. Ott, and F. Stephan were visiting the School of Computer Science and Engineering at ...)
Noisy Inference and Oracles
1996
Abstract

Cited by 7 (2 self)
A learner noisily infers a function or set if every correct item is presented infinitely often while, in addition, some incorrect data ("noise") is presented a finite number of times. It is shown that learning from a noisy informant is equivalent to finite learning with a K-oracle from a usual informant. This result has several variants for learning from text and using different oracles. Furthermore, partial identification of all r.e. sets can also cope with noisy input.
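The notion of a noisy presentation can be illustrated with a toy heuristic (emphatically not the paper's K-oracle construction): since every true item appears infinitely often and each noise item only finitely often, conjecturing the items seen at least some threshold number of times eventually filters out one-off noise.

```python
from collections import Counter

def conjecture(prefix, threshold=2):
    # conjecture the set of items seen at least `threshold` times;
    # a one-off noise item is ignored, while every true item,
    # appearing infinitely often, eventually crosses the threshold
    counts = Counter(prefix)
    return {x for x, c in counts.items() if c >= threshold}

# target set {1, 2}; 9 is a noise item presented only once
stream = [1, 9, 2, 1, 2, 1, 2]
guess = conjecture(stream)
```

The fixed threshold only handles noise repeated fewer times than the threshold; coping with arbitrary finite noise is exactly where the paper's oracle machinery comes in.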
Trees and Learning
Proceedings of the Ninth Conference on Computational Learning Theory (COLT), ACM Press, 1996
Abstract

Cited by 5 (5 self)
We characterize FIN-, EX- and BC-learning, as well as the corresponding notions of team learning, in terms of isolated branches on uniformly strongly recursive sequences of trees. Further, the more restrictive models of FIN-learning and strong-monotonic BC-learning can be characterized in terms of isolated branches on a single tree. We discuss learning with additional information, where the learner receives an index for a strongly recursive tree such that the function to be learned is isolated on this tree. We show that EX-learning with this type of additional information is strictly more powerful than EX-learning.

1 Introduction

Inductive inference [1, 2, 4, 6, 10] deals with learning classes of recursive functions in the limit under certain convergence constraints. The most general setting is that of behaviorally correct learning (BC): for each prefix f(0)f(1)...f(n) of the recursive function f, the learner guesses a program for f; the learner succeeds if ...
Extensional Set Learning
Proceedings of The Twelfth Annual Conference on Computational Learning Theory (COLT '99), 2000
Abstract

Cited by 4 (2 self)
We investigate the model recBC of learning of r.e. sets, where changes in hypotheses only count when there is an extensional difference. We study the learnability of collections that are uniformly r.e. We prove that, in contrast with the case of uniformly recursive collections, identifiability does not imply recursive BC-identifiability. This answers a question of D. de Jongh. In contrast to the model of recursive identifiability, we prove that the BC-model separates the notions of finite thickness and finite elasticity.

1 Introduction

In this paper we consider a model of learning where two hypotheses about the data under consideration are considered equal when they denote the same object, i.e. when they are extensionally the same. This model was first defined for identification of functions by Feldman [6] and Barzdin [3]. The first reference for this model in the context of set learning (learning from text) seems to be Osherson and Weinstein [14]. The model, and similar ones, ha...
Robust Learning with Infinite Additional Information (Extended Abstract)
EUROCOLT'97, LNCS 1208, 1997
Abstract

Cited by 4 (3 self)
The present work investigates Gold-style algorithmic learning from input-output examples whereby the learner has access to oracles as additional information. Furthermore, this access has to be robust; that is, a single learning algorithm has to succeed with every oracle which meets a given specification. The first main result considers oracles of the same Turing degree: robust learning with any oracle from a given degree does not achieve more than learning without any additional information. The further work considers learning from function oracles which describe the whole class of...
Probabilistic inductive inference: a survey
Theoretical Computer Science, 2001
Abstract

Cited by 2 (1 self)
Inductive inference is a recursion-theoretic theory of learning, first developed by E. M. Gold (1967). This paper surveys developments in probabilistic inductive inference. We mainly focus on finite inference of recursive functions, since this simple paradigm has produced the most interesting (and most complex) results.
Counting Extensional Differences in BCLearning
Proceedings of the 5th International Colloquium on Grammatical Inference (ICGI 2000), Springer Lecture Notes in A.I. 1891, 2000
Abstract

Cited by 1 (0 self)
Let BC be the model of behaviourally correct function learning as introduced by Barzdins [4] and Case and Smith [8]. We introduce a mind change hierarchy for BC, counting the number of extensional differences in the hypotheses of a learner. We compare the resulting models BC_n to models from the literature and discuss confidence, team learning, and finitely defective hypotheses. Among other things, we prove that there is a tradeoff between the number of semantic mind changes and the number of anomalies in the hypotheses. We also discuss consequences for language learning. In particular we show that, in contrast to the case of function learning, the family of classes that are confidently BC-learnable from text is not closed under finite unions.

Keywords. Models of grammar induction, inductive inference, behaviourally correct learning.

1 Introduction

Gold [10] introduced an abstract model of learning computable functions, where a learner receives increasing amounts of data ...
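The distinction between syntactic and semantic mind changes can be sketched concretely: two successive hypotheses count as a semantic mind change only when the programs they denote compute different functions. In the sketch below, "programs" are plain Python callables compared on a finite test domain, which is only an approximation (extensional equality of programs is undecidable in general); the helper name and domain are illustrative assumptions.

```python
# Count semantic (extensional) mind changes in a hypothesis sequence,
# in the spirit of the BC_n models: a change of index only counts
# when the newly conjectured function differs from the previous one.

def semantic_mind_changes(hypotheses, domain=range(10)):
    changes = 0
    for h_prev, h_next in zip(hypotheses, hypotheses[1:]):
        # compare the two hypotheses pointwise on the test domain
        if any(h_prev(x) != h_next(x) for x in domain):
            changes += 1
    return changes

# three syntactically distinct hypotheses, but only one extensional change:
# x + 0 and x compute the same function, x + 1 does not
hs = [lambda x: x + 0, lambda x: x, lambda x: x + 1]
n = semantic_mind_changes(hs)
```

A syntactic mind-change counter would report two changes here; the semantic counter reports one, which is the quantity the BC_n hierarchy bounds.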