Learning via Queries in ...
, 1992
Abstract

Cited by 35 (11 self)
We prove that the set of all recursive functions cannot be inferred using first-order queries in the query language containing the extra symbols [+, <]. The proof of this theorem involves a new decidability result about Presburger arithmetic which is of independent interest. Using our machinery, we show that the set of all primitive recursive functions cannot be inferred with a bounded number of mind changes, again using queries in [+, <]. Additionally, we resolve an open question in [7] about passive versus active learning.

1 Introduction

This paper presents new results in the area of query inductive inference (introduced in [7]); in addition, there are results of interest in mathematical logic. Inductive inference is the study of inductive machine learning in a theoretical framework. In query inductive inference, we study the ability of a Query Inference Machine...
On the Structure of Degrees of Inferability
 Journal of Computer and System Sciences
, 1993
Abstract

Cited by 32 (19 self)
Degrees of inferability have been introduced to measure the learning power of inductive inference machines which have access to an oracle. The classical concept of degrees of unsolvability measures the computing power of oracles. In this paper we determine the relationship between both notions.

1 Introduction

We consider learning of classes of recursive functions within the framework of inductive inference [21]. A recent theme is the study of inductive inference machines with oracles ([8, 10, 11, 17, 24] and, tangentially, [12]; cf. [10] for a comprehensive introduction and a collection of all previous results). The basic question is how the information content of the oracle (technically: its Turing degree) relates to its learning power (technically: its inference degree, depending on the underlying inference criterion). In this paper a definitive answer is obtained for the case of recursively enumerable oracles and the case when only finitely many queries to the oracle are allo...
Infinitary Self-Reference in Learning Theory
, 1994
Abstract

Cited by 18 (6 self)
Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self-copy and then runs p on that self-copy together with any externally given input. e(p), in effect, has complete (low-level) self-knowledge, and p represents how e(p) uses its self-knowledge (and its knowledge of the external world). Infinite regress is not required, since e(p) creates its self-copy outside of itself. One mechanism to achieve this creation is a self-replication trick isomorphic to that employed by single-celled organisms. Another is for e(p) to look in a mirror to see which program it is. In 1974 the author published an infinitary generalization of Kleene's theorem, which he called the Operator Recursion Theorem. It provides a means for obtaining an (algorithmically) growing collection of programs which, in effect, share a common (also growing) mirror from which they can obtain complete low-level models of themselves and the other prog...
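The self-copy construction described above can be sketched concretely with a quine-style trick. This is a pedagogical sketch in Python, not the paper's formalism; the names make_self_knowing and payload are illustrative only:

```python
def make_self_knowing(payload_src):
    """Build the source of e(p): on input x, the program first
    reconstructs its own complete source (the quine step, done outside
    itself, so no infinite regress), then runs p on (own_source, x)."""
    template = (
        "payload_src = {payload!r}\n"
        "template = {template!r}\n"
        "own_source = template.format(payload=payload_src, template=template)\n"
        "exec(payload_src)\n"
        "result = payload(own_source, x)\n"
    )
    return template.format(payload=payload_src, template=template)

# p takes (self_source, external_input); here it simply reports both.
payload = "def payload(me, x):\n    return (me, x)\n"
prog = make_self_knowing(payload)

ns = {"x": 41}          # the externally given input
exec(prog, ns)
me, x = ns["result"]
print(me == prog, x)    # the self-copy is exact: prints "True 41"
```

The key point, mirroring the theorem, is that the generated program recomputes its own full source from parts it carries within itself, then hands that self-copy to the payload p.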
Learning Recursive Functions from Approximations
, 1995
Abstract

Cited by 17 (7 self)
Investigated is algorithmic learning, in the limit, of correct programs for recursive functions f from both input/output examples of f and several interesting varieties of approximate additional (algorithmic) information about f. Specifically considered, as such approximate additional information about f, are Rose's frequency computations for f and several natural generalizations from the literature, each generalization involving programs for restricted trees of recursive functions which have f as a branch. Considered as the types of trees are those with bounded variation, bounded width, and bounded rank. For the case of learning final correct programs for recursive functions, EX-learning, where the additional information involves frequency computations, an insightful and interestingly complex combinatorial characterization of learning power is presented as a function of the frequency parameters. For EX-learning (as well as for BC-learning, where a final sequence of cor...
Computational Limits on Team Identification of Languages
, 1993
Abstract

Cited by 17 (7 self)
A team of learning machines is essentially a multiset of learning machines.
Asking Questions versus Verifiability
, 1992
Abstract

Cited by 10 (4 self)
In this paper, φ0, φ1, φ2, ... denotes an acceptable programming system [17], also known as a Gödel numbering of the partial recursive functions [15]. The function φe is said to be computed by the program e.
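As a toy illustration of this notation (an assumption-laden sketch: here an "index" is a Python source string rather than a natural number from a true Gödel numbering):

```python
def phi(e):
    """Return the (partial) function computed by program index e.
    In this sketch an index e is simply a Python source string defining
    a function f; a real acceptable numbering would effectively
    enumerate all programs by natural numbers."""
    ns = {}
    exec(e, ns)       # "run" program e to obtain the function it computes
    return ns["f"]

square = phi("def f(x):\n    return x * x\n")
print(square(7))  # 49
```

The defining property of an acceptable system, universality plus effective composition, is not captured by this sketch; it only illustrates the reading "φe is the function computed by program e".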
Training Sequences
Abstract

Cited by 8 (1 self)
This paper initiates a study in which it is demonstrated that certain concepts (represented by functions) can be learned, but only in the event that certain relevant subconcepts (also represented by functions) have been previously learned. In other words, the Soar project presents empirical evidence that learning how to learn is viable for computers, and this paper proves that doing so is the only way possible for computers to make certain inferences.
On Aggregating Teams of Learning Machines
 Theoretical Computer Science A
, 1994
Abstract

Cited by 7 (4 self)
The present paper studies the problem of when a team of learning machines can be aggregated into a single learning machine without any loss in learning power. The main results concern aggregation ratios for vacillatory identification of languages from texts. For a positive integer n, a machine is said to TxtFex_n-identify a language L just in case the machine converges to up to n grammars for L on any text for L. For such identification criteria, the aggregation ratio is derived for the n = 2 case. It is shown that the collections of languages that can be TxtFex_2-identified by teams with success ratio greater than 5/6 are the same as those collections of languages that can be TxtFex_2-identified by a single machine. It is also established that 5/6 is indeed the cutoff point by showing that there are collections of languages that can be TxtFex_2-identified by a team employing 6 machines, at least 5 of which are required to be successful, but cannot be TxtFex_2-identified by any single machine. Additionally, aggregation ratios are also derived for finite identification of languages from positive data and for numerous criteria involving language learning from both positive and negative data.
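The success ratios involved are simple fractions; a minimal sketch of the arithmetic around the 5/6 cutoff (the bracket notation [k, m] for "at least k of m team members must succeed" is an illustrative convention here, not a claim about the paper's notation):

```python
from fractions import Fraction

def success_ratio(k, m):
    """Ratio for a team criterion [k, m]: at least k of m machines succeed."""
    return Fraction(k, m)

cutoff = Fraction(5, 6)

# The witnessing team from the abstract: 6 machines, 5 required to succeed,
# sits exactly at the cutoff and does NOT collapse to a single machine.
print(success_ratio(5, 6) == cutoff)  # True

# Any ratio strictly above 5/6 does collapse to a single machine.
print(success_ratio(6, 7) > cutoff)   # True (36/42 > 35/42)
```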
Non-U-Shaped Vacillatory and Team Learning
, 2008
Abstract

Cited by 6 (2 self)
U-shaped learning behaviour in cognitive development involves learning, unlearning, and relearning. It occurs, for example, in learning irregular verbs. The prior cognitive science literature is occupied with how humans do it, for example, general rules versus tables of exceptions. This paper is mostly concerned with whether U-shaped learning behaviour may be necessary in the abstract mathematical setting of inductive inference, that is, in the computational learning theory following the framework of Gold. All notions considered are learning from text, that is, from positive data. Previous work showed that U-shaped learning behaviour is necessary for behaviourally correct learning but not for syntactically convergent learning in the limit (= explanatory learning). The present paper establishes the necessity for the hierarchy of classes of vacillatory learning, where a behaviourally correct learner has to satisfy the additional constraint that it vacillates in the limit between at most b grammars, where b ∈ {2, 3, ..., ∗}. Non-U-shaped vacillatory learning is shown to be restrictive: every non-U-shaped vacillatorily learnable class is already learnable in the limit. Furthermore, if vacillatory learning with the parameter b = 2 is possible, then non-U-shaped behaviourally correct learning is also possible. But for b = 3, surprisingly, there is a class witnessing that this implication fails.