Results 1–10 of 17
On the Intrinsic Complexity of Learning
 Information and Computation
, 1995
Abstract

Cited by 25 (6 self)
A new view of learning is presented. The basis of this view is a natural notion of reduction. We prove completeness and relative difficulty results. An infinite hierarchy of intrinsically more and more difficult to learn concepts is presented. Our results indicate that the complexity notion captured by our new notion of reduction differs dramatically from the traditional studies of the complexity of the algorithms performing learning tasks. 1 Introduction Traditional studies of inductive inference have focused on illuminating various strata of learnability based on varying the definition of learnability. The research following Valiant's PAC model [Val84] and Angluin's teacher/learner model [Ang88] paid very careful attention to calculating the complexity of the learning algorithm. We present a new view of learning, based on the notion of reduction, that captures a different perspective on learning complexity than all prior studies. Based on our preliminary reports, Jain...
On the Impact of Forgetting on Learning Machines
 Journal of the ACM
, 1993
Abstract

Cited by 10 (3 self)
this paper contributes toward the goal of understanding how a computer can be programmed to learn by isolating features of incremental learning algorithms that theoretically enhance their learning potential. In particular, we examine the effects of imposing a limit on the amount of information that a learning algorithm can hold in its memory as it attempts to ... This work was facilitated by an international agreement under NSF Grant 9119540.
Synthesizing Learners Tolerating Computable Noisy Data
 In Proc. 9th International Workshop on Algorithmic Learning Theory, Lecture
, 1998
Abstract

Cited by 6 (0 self)
An index for an r.e. class of languages (by definition) generates a sequence of grammars defining the class. An index for an indexed family of languages (by definition) generates a sequence of decision procedures defining the family. F. Stephan's model of noisy data is employed, in which, roughly, correct data crops up infinitely often, and incorrect data only finitely often. In a completely computable universe, all data sequences, even noisy ones, are computable. New to the present paper is the restriction that noisy data sequences be, nonetheless, computable! Studied, then, is the synthesis from indices for r.e. classes and for indexed families of languages of various kinds of noise-tolerant language-learners for the corresponding classes or families indexed, where the noisy input data sequences are restricted to being computable. Many positive results, as well as some negative results, are presented regarding the existence of such synthesizers. The main positive result is surpris...
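The noisy-data model described above can be made concrete with a toy simulation: correct data crops up infinitely often (here: repeatedly in a finite prefix), noise only finitely often, so occurrence counts eventually separate the language from the noise. The counting learner below is only an illustration of the data model, not the paper's synthesizer, and the stream and threshold are hypothetical.

```python
from collections import Counter

def conjecture(prefix, threshold):
    """Guess the language as the set of items seen at least `threshold` times
    in the data prefix; noise, appearing only finitely often, eventually
    falls below any growing threshold."""
    counts = Counter(prefix)
    return {x for x, c in counts.items() if c >= threshold}

# Target language {1, 2}; the datum 9 is noise appearing finitely often.
stream = [1, 2, 9, 1, 2, 1, 2, 1, 2]   # hypothetical computable data sequence
print(conjecture(stream, 3))            # -> {1, 2}: the noise is filtered out
```

On longer and longer prefixes with a slowly growing threshold, the conjecture stabilizes on the target language, which is the intuition behind noise tolerance in this model.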
On the Role of Search for Learning from Examples
 Journal of Experimental and Theoretical Artificial Intelligence
Abstract

Cited by 4 (0 self)
Gold [Gol67] discovered a fundamental enumeration technique, the so-called identification-by-enumeration, a simple but powerful class of algorithms for learning from examples (inductive inference). We introduce a variety of more sophisticated (and more powerful) enumeration techniques and characterize their power. We conclude with the thesis that enumeration techniques are even universal in that each solvable learning problem in inductive inference can be solved by an adequate enumeration technique. This thesis is technically motivated and discussed. Keywords: learning from examples, learning by search, identification by enumeration, enumeration techniques. 1 Introduction The role of search, for learning from examples, is examined in a theoretical setting. Gold's seminal paper [Gol67] on inductive inference introduced a simple but powerful learning technique which became known as identification-by-enumeration. Identification-by-enumeration begins with an infi...
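Gold's identification-by-enumeration can be sketched in a few lines: enumerate the hypotheses in a fixed order and conjecture the first one consistent with the data seen so far. The sketch below uses a hypothetical finite list of callables as the enumeration; the real setting works over an infinite r.e. enumeration of programs, with consistency checked against ever-longer data prefixes.

```python
def identify_by_enumeration(hypotheses, examples):
    """Return the index of the first hypothesis consistent with all examples.

    hypotheses: list of callables h(x) -> y, standing in for an enumeration
                of programs; examples: iterable of (x, f(x)) graph points.
    """
    data = list(examples)
    for i, h in enumerate(hypotheses):
        if all(h(x) == y for x, y in data):
            return i  # first consistent hypothesis wins
    return None  # target not covered by this (finite) enumeration

# Toy class: the constant functions c_0, c_1, c_2 (illustrative stand-ins).
hypotheses = [lambda x, c=c: c for c in range(3)]
print(identify_by_enumeration(hypotheses, [(0, 2), (5, 2)]))  # -> 2
```

On successive prefixes of the target's graph, the conjectured index can only move forward, which is why this strategy converges (identifies in the limit) whenever the target appears in the enumeration.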
A Survey of Inductive Inference with an Emphasis on Queries
 Complexity, Logic, and Recursion Theory, number 187 in Lecture notes in Pure and Applied Mathematics Series
, 1997
Abstract

Cited by 4 (0 self)
this paper M_0, M_1, ... is a standard list of all Turing machines, M
On the Synthesis of Strategies Identifying Recursive Functions
 Proceedings of the 14th Annual Conference on Computational Learning Theory, Lecture Notes in Artificial Intelligence 2111
, 2001
Abstract

Cited by 3 (3 self)
A classical learning problem in Inductive Inference consists of identifying each function of a given class of recursive functions from a finite number of its output values. Uniform learning is concerned with the design of single programs solving infinitely many classical learning problems. For that purpose the program reads a description of an identification problem and is supposed to construct a technique for solving the particular problem. As can be proved, uniform solvability of collections of solvable identification problems is influenced by the description of the problems rather than by the particular problems themselves. When prescribing a specific inference criterion (for example learning in the limit), a clever choice of descriptions allows uniform solvability of all solvable problems, whereas even the simplest classes of recursive functions are not uniformly learnable without restricting the set of possible descriptions. Furthermore the influence of the hypothesis spaces on uniform learnability is analysed.
On Duality in Learning and the Selection of Learning Teams
 Information and Computation
, 1996
Abstract

Cited by 3 (2 self)
Previous work in inductive inference dealt mostly with finding one or several machines (IIMs) that successfully learn a collection of functions. Herein we start with a class of functions and consider the learner set of all IIMs that are successful at learning the given class. Applying this perspective to the case of team inference leads to the notion of diversification for a class of functions. This enables us to distinguish between several flavors of IIMs all of which must be represented in a team learning the given class. 1 Introduction All current theoretical approaches to machine learning tend to focus on a particular machine or a collection of machines and then find the class of concepts which can be learned by these machines under certain constraints defining a criterion of successful learning [AS83, OSW86]. In this paper we investigate the dual problem: Given some set of concepts, which algorithms can learn all those concepts? From [AGS89] we know that in the theory ...
Costs of general purpose learning
 Theoretical Computer Science
Abstract

Cited by 2 (1 self)
Leo Harrington surprisingly constructed a machine which can learn any computable function f according to the following criterion (called Bc*-identification). His machine, on the successive graph points of f, outputs a corresponding infinite sequence of programs p_0, p_1, p_2, ..., and, for some i, the programs p_i, p_{i+1}, p_{i+2}, ... each compute a variant of f which differs from f at only finitely many argument places. A machine with this property is called general purpose. The sequence p_i, p_{i+1}, p_{i+2}, ... is called a final sequence. For Harrington's general purpose machine, for distinct m and n, the finitely many argument places where p_{i+m} fails to compute f can be very different from the finitely many argument places where p_{i+n} fails to compute f. One would hope, though, that if Harrington's machine, or an improvement thereof, inferred the program p_{i+m} based on the data points f(0), f(1), ..., f(k), then p_{i+m} would make very few mistakes computing f at the "near future" arguments k + 1, k + 2, ..., k + ℓ, where ℓ is reasonably large. Ideally, p_{i+m}'s finitely many mistakes or anomalies would (mostly) occur at arguments x ≫ k, i.e., ideally, its anomalies would be well placed beyond
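The notion of a "variant of f with finitely many anomalies" from the Bc*-criterion above can be illustrated over a finite window. The true definition quantifies over all arguments, so the check below is only an approximation; the target f and the candidate programs p_i and p_j are hypothetical stand-ins, not Harrington's machine.

```python
def anomalies(program, f, window):
    """Arguments in range(window) where `program` miscomputes f."""
    return [x for x in range(window) if program(x) != f(x)]

f = lambda x: x * x                       # illustrative target function
p_i = lambda x: x * x if x != 3 else -1   # a finite variant of f: one anomaly
p_j = lambda x: x + 1                     # disagrees with f almost everywhere

print(anomalies(p_i, f, 100))        # -> [3]: finitely many, well placed
print(len(anomalies(p_j, f, 100)))   # -> 100: not a finite variant here
```

In these terms, the hope expressed above is that the anomalies of an inferred program lie far beyond the data points it was inferred from, rather than at the near-future arguments.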
Gold-style and query learning under various constraints on the target class
 In: Proceedings of the 16th International Conference on Algorithmic Learning Theory, Lecture Notes in Artificial Intelligence 3734
, 2005