Results 1-10 of 26
Learning via Queries in ...
, 1992
"... We prove that the set of all recursive functions cannot be inferred using firstorder queries in the query language containing extra symbols [+; !]. The proof of this theorem involves a new decidability result about Presburger arithmetic which is of independent interest. Using our machinery, we ..."
Abstract

Cited by 35 (11 self)
 Add to MetaCart
We prove that the set of all recursive functions cannot be inferred using first-order queries in the query language containing extra symbols [+, <]. The proof of this theorem involves a new decidability result about Presburger arithmetic which is of independent interest. Using our machinery, we show that the set of all primitive recursive functions cannot be inferred with a bounded number of mind changes, again using queries in [+, <]. Additionally, we resolve an open question in [7] about passive versus active learning.

1 Introduction

This paper presents new results in the area of query inductive inference (introduced in [7]); in addition, there are results of interest in mathematical logic. Inductive inference is the study of inductive machine learning in a theoretical framework. In query inductive inference, we study the ability of a Query Inference Machine ...

1. Supported, in part, by NSF grants CCR 8803641 and 9020079.
2. Also with IBM Corporation, Application Solutions...
On the Structure of Degrees of Inferability
 Journal of Computer and System Sciences
, 1993
"... Degrees of inferability have been introduced to measure the learning power of inductive inference machines which have access to an oracle. The classical concept of degrees of unsolvability measures the computing power of oracles. In this paper we determine the relationship between both notions. ..."
Abstract

Cited by 32 (19 self)
 Add to MetaCart
Degrees of inferability have been introduced to measure the learning power of inductive inference machines which have access to an oracle. The classical concept of degrees of unsolvability measures the computing power of oracles. In this paper we determine the relationship between both notions.

1 Introduction

We consider learning of classes of recursive functions within the framework of inductive inference [21]. A recent theme is the study of inductive inference machines with oracles ([8, 10, 11, 17, 24] and tangentially [12]; cf. [10] for a comprehensive introduction and a collection of all previous results). The basic question is how the information content of the oracle (technically: its Turing degree) relates to its learning power (technically: its inference degree, depending on the underlying inference criterion). In this paper a definitive answer is obtained for the case of recursively enumerable oracles and the case when only finitely many queries to the oracle are allo...
Learning Recursive Functions from Approximations
, 1995
"... Investigated is algorithmic learning, in the limit, of correct programs for recursive functions f from both input/output examples of f and several interesting varieties of approximate additional (algorithmic) information about f . Specifically considered, as such approximate additional informatio ..."
Abstract

Cited by 17 (7 self)
 Add to MetaCart
Investigated is algorithmic learning, in the limit, of correct programs for recursive functions f from both input/output examples of f and several interesting varieties of approximate additional (algorithmic) information about f. Specifically considered, as such approximate additional information about f, are Rose's frequency computations for f and several natural generalizations from the literature, each generalization involving programs for restricted trees of recursive functions which have f as a branch. Considered as the types of trees are those with bounded variation, bounded width, and bounded rank. For the case of learning final correct programs for recursive functions, EX-learning, where the additional information involves frequency computations, an insightful and interestingly complex combinatorial characterization of learning power is presented as a function of the frequency parameters. For EX-learning (as well as for BC-learning, where a final sequence of cor...
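The frequency computations mentioned in this abstract can be illustrated with a small sketch. This is a hedged toy picture of an (m, n)-frequency computation in Rose's sense: on n distinct inputs the procedure returns n answers, of which at least m must be correct. All function names here are illustrative assumptions, not from the paper.

```python
# Toy illustration of an (m, n)-frequency computation (hedged sketch;
# names are illustrative, not the paper's formalism).

def is_frequency_computation(f, f_approx, xs, m):
    """Check that f_approx answers at least m of the queries xs correctly."""
    answers = [f_approx(x) for x in xs]
    correct = sum(1 for x, a in zip(xs, answers) if f(x) == a)
    return correct >= m

f = lambda x: x                                # the target recursive function
f_approx = lambda x: x if x % 2 == 0 else 0    # correct only on even inputs

print(is_frequency_computation(f, f_approx, [0, 1, 2, 3], m=2))  # True
print(is_frequency_computation(f, f_approx, [0, 1, 2, 3], m=3))  # False
```

Here f_approx realizes a (2, 4)-frequency computation of f on the sample queries: it is right on exactly half of any set of consecutive inputs.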
Computational Limits on Team Identification of Languages
, 1993
"... A team of learning machines is essentially a multiset of learning machines. ..."
Abstract

Cited by 17 (7 self)
 Add to MetaCart
A team of learning machines is essentially a multiset of learning machines.
On the Impact of Forgetting on Learning Machines
 Journal of the ACM
, 1993
"... this paper contributes toward the goal of understanding how a computer can be programmed to learn by isolating features of incremental learning algorithms that theoretically enhance their learning potential. In particular, we examine the effects of imposing a limit on the amount of information that ..."
Abstract

Cited by 10 (3 self)
 Add to MetaCart
this paper contributes toward the goal of understanding how a computer can be programmed to learn by isolating features of incremental learning algorithms that theoretically enhance their learning potential. In particular, we examine the effects of imposing a limit on the amount of information that a learning algorithm can hold in its memory as it attempts to ...

This work was facilitated by an international agreement under NSF Grant 9119540.
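The memory limitation this abstract discusses can be sketched concretely. The following is a hedged toy model, not the paper's formal definition of forgetting: the learner sees a stream of input/output examples but may retain only a bounded number of them when revising its guess.

```python
from collections import deque

# Hedged toy model of a memory-limited learner: it retains at most
# `memory_limit` past examples (illustrative only; the paper's model
# of bounded memory is more subtle).

def limited_memory_learner(stream, hypotheses, memory_limit=2):
    buf = deque(maxlen=memory_limit)   # bounded memory of past examples
    guess = None
    for x, y in stream:
        buf.append((x, y))
        # Revise: first hypothesis consistent with the remembered examples.
        for i, h in enumerate(hypotheses):
            if all(h(a) == b for a, b in buf):
                guess = i
                break
    return guess

hypotheses = [lambda x: 0, lambda x: x]
stream = [(x, x) for x in range(4)]    # graph of the identity function
print(limited_memory_learner(stream, hypotheses))  # 1
```

Because the buffer is bounded, the learner can be fooled by classes where old examples matter, which is the kind of effect the paper quantifies.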
Training Sequences
"... this paper initiates a study in which it is demonstrated that certain concepts (represented by functions) can be learned, but only in the event that certain relevant subconcepts (also represented by functions) have been previously learned. In other words, the Soar project presents empirical evidence ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
this paper initiates a study in which it is demonstrated that certain concepts (represented by functions) can be learned, but only in the event that certain relevant subconcepts (also represented by functions) have been previously learned. In other words, the Soar project presents empirical evidence that learning how to learn is viable for computers, and this paper proves that doing so is the only way possible for computers to make certain inferences.
On Aggregating Teams of Learning Machines
 Theoretical Computer Science A
, 1994
"... The present paper studies the problem of when a team of learning machines can be aggregated into a single learning machine without any loss in learning power. The main results concern aggregation ratios for vacillatory identification of languages from texts. For a positiveinteger n,amachine is said ..."
Abstract

Cited by 7 (4 self)
 Add to MetaCart
The present paper studies the problem of when a team of learning machines can be aggregated into a single learning machine without any loss in learning power. The main results concern aggregation ratios for vacillatory identification of languages from texts. For a positive integer n, a machine is said to TxtFex_n-identify a language L just in case the machine converges to up to n grammars for L on any text for L. For such identification criteria, the aggregation ratio is derived for the n = 2 case. It is shown that the collection of languages that can be TxtFex_2-identified by teams with success ratio greater than 5/6 are the same as those collections of languages that can be TxtFex_2-identified by a single machine. It is also established that 5/6 is indeed the cutoff point by showing that there are collections of languages that can be TxtFex_2-identified by a team employing 6 machines, at least 5 of which are required to be successful, but cannot be TxtFex_2-identified by any single machine. Additionally, aggregation ratios are also derived for finite identification of languages from positive data and for numerous criteria involving language learning from both positive and negative data.
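The 5/6 cutoff in this abstract reduces to a simple arithmetic check. A hedged sketch (not the paper's construction): a team of m machines of which at least k must succeed has success ratio k/m, and by the stated result a team is aggregatable into a single TxtFex_2 machine exactly when that ratio strictly exceeds 5/6.

```python
from fractions import Fraction

# Hedged arithmetic illustration of the aggregation cutoff from the
# abstract: ratios strictly above 5/6 aggregate; 5/6 itself does not.

def aggregatable(k, m, cutoff=Fraction(5, 6)):
    """True iff a team of m machines, k required to succeed, beats the cutoff."""
    return Fraction(k, m) > cutoff

print(aggregatable(6, 6))    # True: ratio 1 > 5/6
print(aggregatable(5, 6))    # False: ratio exactly 5/6, the paper's witness case
print(aggregatable(11, 12))  # True: 11/12 > 5/6
```

Using exact fractions avoids the floating-point comparison being wrong precisely at the boundary case k/m = 5/6.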
Recursion Theoretic Models of Learning: Some Results and Intuitions
 Annals of Mathematics and Artificial Intelligence
, 1995
"... View of Learning To implement a program that somehow "learns" it is neccessary to fix a set of concepts to be learned and develop a representation for the concepts and examples of the concepts. In order to investigate general properties of machine learning it is neccesary to work in as representati ..."
Abstract

Cited by 5 (2 self)
 Add to MetaCart
View of Learning

To implement a program that somehow "learns" it is necessary to fix a set of concepts to be learned and develop a representation for the concepts and examples of the concepts. In order to investigate general properties of machine learning it is necessary to work in as representation-independent a fashion as possible. In this work, we consider machines that learn programs for recursive functions. Several authors have argued that such studies are general enough to include a wide array of learning situations [2, 3, 22, 23, 24]. For example, a behavior to be learned can be modeled as a set of stimulus and response pairs. Assuming that any behavior associates only one response to each possible stimulus, behaviors can be viewed as functions from stimuli to responses. Some behaviors, such as anger, are not easily modeled as functions. Our primary interest, however, concerns the learning of fundamental behaviors such as reading (mapping symbols to sounds), recognition (mapping pa...
Trees and Learning
 Proceedings of the Ninth Conference on Computational Learning Theory (COLT) ACMPress
, 1996
"... We characterize FIN, EX and BClearning, as well as the corresponding notions of team learning, in terms of isolated branches on uniformly strongly recursive sequences of trees. Further, the more restrictive models of FINlearning and strongmonotonic BClearning can be characterized in terms of i ..."
Abstract

Cited by 5 (5 self)
 Add to MetaCart
We characterize FIN-, EX- and BC-learning, as well as the corresponding notions of team learning, in terms of isolated branches on uniformly strongly recursive sequences of trees. Further, the more restrictive models of FIN-learning and strong-monotonic BC-learning can be characterized in terms of isolated branches on a single tree. We discuss learning with additional information where the learner receives an index for a strongly recursive tree such that the function to be learned is isolated on this tree. We show that EX-learning with this type of additional information is strictly more powerful than EX-learning.

1 Introduction

Inductive inference [1, 2, 4, 6, 10] deals with learning classes of recursive functions in the limit under certain convergence constraints. The most general setting is that of behaviorally correct learning (BC): for each prefix f(0) f(1) ... f(n) of the recursive function f, the learner guesses a program for f; the learner succeeds if ... Universität Heidel...
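The limit-learning setting described in this abstract (after each prefix f(0) f(1) ... f(n) the learner guesses a program) can be sketched with learning by enumeration over a finite hypothesis class. This is a hedged illustration of EX-style convergence, not the paper's tree-based characterization; all names are assumptions.

```python
# Hedged sketch of identification in the limit by enumeration: after each
# prefix of the target function, guess the first consistent hypothesis.
# Over a finite class, the sequence of guesses converges to a correct
# index (EX-style convergence).

def enumeration_learner(hypotheses, prefix):
    for i, h in enumerate(hypotheses):
        if all(h(x) == y for x, y in enumerate(prefix)):
            return i
    return None  # no hypothesis in the class fits the data

hypotheses = [lambda x: 0, lambda x: x, lambda x: x * x]
target = lambda x: x * x

guesses = [enumeration_learner(hypotheses, [target(x) for x in range(n + 1)])
           for n in range(5)]
print(guesses)  # [0, 1, 2, 2, 2] -- the guesses converge to index 2
```

The early wrong guesses (indices 0 and 1) are the "mind changes" of the literature; once the prefix rules them out, the learner settles on the correct program and never changes again.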