Results 1–9 of 9
Learning Recursive Functions from Approximations
, 1995
Abstract

Cited by 17 (7 self)
Investigated is algorithmic learning, in the limit, of correct programs for recursive functions f from both input/output examples of f and several interesting varieties of approximate additional (algorithmic) information about f. Specifically considered, as such approximate additional information about f, are Rose's frequency computations for f and several natural generalizations from the literature, each generalization involving programs for restricted trees of recursive functions which have f as a branch. Considered as the types of trees are those with bounded variation, bounded width, and bounded rank. For the case of learning final correct programs for recursive functions, EX learning, where the additional information involves frequency computations, an insightful and interestingly complex combinatorial characterization of learning power is presented as a function of the frequency parameters. For EX learning (as well as for BC learning, where a final sequence of cor...
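The EX criterion mentioned above — identifying, in the limit, a correct program from input/output examples — can be illustrated with a toy learner that enumerates a small hypothesis class and always conjectures the first hypothesis consistent with the data seen so far. This is a minimal sketch of learning by enumeration, not code from the paper; the hypothesis class (small linear functions) is an illustrative assumption.

```python
# Toy sketch of EX-style identification in the limit via enumeration.
# Hypothesis class: f(x) = a*x + b for small non-negative a, b (an assumption
# for illustration; the paper works with arbitrary recursive functions).

def make_class():
    """A finite enumeration of total functions, each paired with a name."""
    hyps = []
    for a in range(3):
        for b in range(3):
            hyps.append((f"{a}*x+{b}", lambda x, a=a, b=b: a * x + b))
    return hyps

def ex_learn(target, hyps, horizon=10):
    """Feed input/output examples of `target` one at a time and record the
    sequence of conjectures; mind changes show up as changes in this list."""
    data = []
    conjectures = []
    for x in range(horizon):
        data.append((x, target(x)))
        # conjecture the first enumerated hypothesis consistent with all data
        for name, h in hyps:
            if all(h(u) == v for u, v in data):
                conjectures.append(name)
                break
    return conjectures

hyps = make_class()
guesses = ex_learn(lambda x: 2 * x + 1, hyps)
# After finitely many examples the conjecture stabilises on a correct
# program for the target, which is exactly the EX success criterion.
```

Since the target lies in the class, the sequence of conjectures converges after finitely many data points; enumeration order determines how many mind changes occur on the way.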
Robust Learning Aided by Context
 In Proceedings of the Eleventh Annual Conference on Computational Learning Theory
, 1998
Abstract

Cited by 10 (6 self)
Empirical studies of multitask learning provide some evidence that the performance of a learning system on its intended targets improves by presenting to the learning system related tasks, also called contexts, as additional input. Angluin, Gasarch, and Smith, as well as Kinber, Smith, Velauthapillai, and Wiehagen have provided mathematical justification for this phenomenon in the inductive inference framework. However, their proofs rely heavily on self-referential coding tricks, that is, they directly code the solution of the learning problem into the context. Fulk has shown that for the Ex and Bc anomaly hierarchies, such results, which rely on self-referential coding tricks, may not hold robustly. In this work we analyze robust versions of learning aided by context and show that, in contrast to Fulk's result above, the robust versions of these results still hold.
Robust Behaviourally Correct Learning
 Information and Computation
, 1998
Abstract

Cited by 4 (2 self)
Intuitively, a class of functions is robustly learnable if not only the class itself, but also all of the transformations of the class under natural transformations (such as via general recursive operators), are learnable. Fulk [Ful90] showed the existence of a nontrivial class which is robustly learnable under the criterion Ex. However, several of the hierarchies (such as the anomaly hierarchies for Ex and Bc) do not stand robustly. Fulk left open the question of whether Bc and Ex can be robustly separated. In this paper we resolve this question positively.
Machine induction without revolutionary changes in hypothesis size
 Information and Computation
, 1996
Abstract

Cited by 3 (2 self)
This paper provides a beginning study of the effects on inductive inference of paradigm shifts whose absence is approximately modeled by various formal approaches to forbidding large changes in the size of programs conjectured. One approach, called severely parsimonious, requires all the programs conjectured on the way to success to be nearly (i.e., within a recursive function of) minimal size. It is shown that this very conservative constraint allows learning infinite classes of functions, but not infinite r.e. classes of functions. Another approach, called nonrevolutionary, requires all conjectures to be nearly the same size as one another. This quite conservative constraint is, nonetheless, shown to permit learning some infinite r.e. classes of functions. Allowing up to one extra bounded size mind change towards a final program learned certainly doesn’t appear revolutionary. However, somewhat surprisingly for scientific (inductive) inference, it is shown that there are classes learnable with the nonrevolutionary constraint (respectively, with severe parsimony), up to (i + 1) mind changes, and no anomalies, which classes cannot be learned with no size constraint, an unbounded, finite number of anomalies in the final program, but with no more than i mind changes. Hence, in some cases, the possibility of one extra mind change is considerably more liberating than removal of very conservative size shift constraints. The proofs of these results are also combinatorially interesting.
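The mind-change counts central to the abstract above have a simple operational reading: given the sequence of programs a learner conjectures as data arrives, count the positions where consecutive conjectures differ. A small illustrative helper (mine, not from the paper) makes this concrete:

```python
# Illustrative helper: count mind changes in a learner's conjecture sequence,
# i.e. the number of times consecutive conjectures differ. Program names
# like "p0", "p1" are hypothetical placeholders.

def mind_changes(conjectures):
    """Number of positions i with conjectures[i] != conjectures[i+1]."""
    return sum(1 for a, b in zip(conjectures, conjectures[1:]) if a != b)

# A learner that guesses p0, revises once to p1, then stays put:
n = mind_changes(["p0", "p0", "p1", "p1", "p1"])  # one mind change
```

Under an i-mind-change criterion, a learner succeeds only if this count never exceeds i on any target in the class, which is the quantity the (i + 1) versus i separation in the abstract is about.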
On Duality in Learning and the Selection of Learning Teams
 Information and Computation
, 1996
Abstract

Cited by 3 (2 self)
Previous work in inductive inference dealt mostly with finding one or several machines (IIMs) that successfully learn a collection of functions. Herein we start with a class of functions and consider the learner set of all IIMs that are successful at learning the given class. Applying this perspective to the case of team inference leads to the notion of diversification for a class of functions. This enables us to distinguish between several flavors of IIMs, all of which must be represented in a team learning the given class.

1 Introduction

All current theoretical approaches to machine learning tend to focus on a particular machine or a collection of machines and then find the class of concepts which can be learned by these machines under certain constraints defining a criterion of successful learning [AS83, OSW86]. In this paper we investigate the dual problem: given some set of concepts, which algorithms can learn all those concepts? From [AGS89] we know that in the theory ...
Learning From Context Without Coding Tricks
Abstract
Empirical studies of multitask learning provide some evidence that the performance of a learning system on its intended targets improves by presenting to the learning system additional related tasks, also called contexts, as additional input. Angluin, Gasarch, and Smith, as well as Kinber, Smith, Velauthapillai, and Wiehagen have provided mathematical justification for this phenomenon in the inductive inference framework. However, their proofs rely heavily on self-referential coding tricks, that is, they directly code the solution of the learning problem into the context. In this work we prove, in the inductive inference setting, that multitask learning is extremely powerful even without using obvious coding tricks. Coding tricks are avoided in the powerful sense of Fulk's notion of robust learning. Also studied is the difficulty of the functional dependence between the intended target tasks and useful associated contexts.
Directions for Computability Theory Beyond Pure Mathematical
Abstract
This paper begins by briefly indicating the principal, nonstandard motivations of the author for his decades of work in Computability Theory (CT), a.k.a. Recursive Function Theory. Then it discusses his proposed, general directions beyond those from pure mathematics for CT. These directions are as follows.

1. Apply CT to basic sciences, for example, biology, psychology, physics, chemistry, and economics.
2. Apply the resultant insights from 1 to philosophy and, more generally, apply CT to areas of philosophy in addition to the philosophy and foundations of mathematics.
3. Apply CT for insights into engineering and other professional fields.

Lastly, this paper provides a progress report on the above non-pure-mathematical directions for CT, including examples for biology, cognitive science and learning theory, philosophy of science, physics, applied machine learning, and computational complexity. Interwoven with the report are occasional remarks about the future.
Universität Heidelberg
Abstract
Empirical studies of multitask learning provide some evidence that the performance of a learning system on its intended targets improves by presenting to the learning system related tasks, also called contexts, as additional input. Angluin, Gasarch, and Smith, as well as Kinber, Smith, Velauthapillai, and Wiehagen have provided mathematical justification for this phenomenon in the inductive inference framework. However, their proofs rely heavily on self-referential coding tricks, that is, they directly code the solution of the learning problem into the context. Fulk has shown that for the Ex and Bc anomaly hierarchies, such results, which rely on self-referential coding tricks, do not hold robustly. In this work we analyze robust versions of learning aided by context and show that, in contrast to Fulk's result above, context also aids learning robustly. Also studied is the difficulty of the functional dependence between the intended target tasks and useful associated contexts.
Computational learning theory: New models and . . .
, 1989
Abstract
In the past several years, there has been a surge of interest in computational learning theory: the formal (as opposed to empirical) study of learning algorithms. One major cause for this interest was the model of probably approximately correct learning, or pac learning, introduced by Valiant in 1984. This thesis begins by presenting a new learning algorithm for a particular problem within that model: learning submodules of the free Z-module Z^k. We prove that this algorithm achieves probable approximate correctness, and indeed, that it is within a log log factor of optimal in a related, but more stringent, model of learning, on-line mistake-bounded learning. We then proceed to examine the influence of noisy data on pac learning algorithms in general. Previously it has been shown that it is possible to tolerate large amounts of random classification noise, but only a very small amount of a very malicious sort of noise. We show that similar results can be obtained for models of noise in between the previously studied models: a large amount of malicious classification noise can be ...
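As a rough illustration of the pac model mentioned above, the textbook realizable-case sample-complexity bound for a finite hypothesis class can be computed directly: m ≥ (1/ε)(ln |H| + ln(1/δ)) examples suffice for any consistent learner. This bound is standard background material, not a result of the thesis, and the numbers below are invented for illustration.

```python
import math

def pac_sample_bound(h_size, eps, delta):
    """Standard realizable-case PAC bound for a finite hypothesis class H:
    with m >= (1/eps) * (ln |H| + ln(1/delta)) examples, any hypothesis
    consistent with the sample has true error <= eps with probability
    at least 1 - delta."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / eps)

# Hypothetical setting: |H| = 1000 hypotheses, error 0.1, confidence 95%.
m = pac_sample_bound(h_size=1000, eps=0.1, delta=0.05)  # -> 100 examples
```

The bound grows only logarithmically in |H| and 1/δ but linearly in 1/ε, which is why even large finite hypothesis classes remain pac learnable from modest samples.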