Results 1–8 of 8
Computational Limitations on Learning from Examples
Journal of the ACM, 1988
Cited by 192 (10 self)
Abstract. The computational complexity of learning Boolean concepts from examples is investigated. It is shown for various classes of concept representations that these cannot be learned feasibly in a distribution-free sense unless RP = NP. These classes include (a) disjunctions of two monomials, (b) Boolean threshold functions, and (c) Boolean formulas in which each variable occurs at most once. Relationships between learning of heuristics and finding approximate solutions to NP-hard optimization problems are given. Categories and Subject Descriptors: F.1.1 [Computation by Abstract Devices]: Models of Computation (relations among models); F.1.2 [Computation by Abstract Devices]: Modes of Computation (probabilistic computation); F.1.3 [Computation by Abstract Devices]: Complexity Classes (reducibility and completeness); I.2.6 [Artificial Intelligence]: Learning (concept learning; induction)
Infinitary Self Reference in Learning Theory
1994
Cited by 18 (6 self)
Abstract:
Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self-copy and then runs p on that self-copy together with any externally given input. e(p), in effect, has complete (low-level) self-knowledge, and p represents how e(p) uses its self-knowledge (and its knowledge of the external world). Infinite regress is not required, since e(p) creates its self-copy outside of itself. One mechanism to achieve this creation is a self-replication trick isomorphic to that employed by single-celled organisms. Another is for e(p) to look in a mirror to see which program it is. In 1974 the author published an infinitary generalization of Kleene's theorem, which he called the Operator Recursion Theorem. It provides a means for obtaining an (algorithmically) growing collection of programs which, in effect, share a common (also growing) mirror from which they can obtain complete low-level models of themselves and the other programs ...
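The self-replication trick mentioned in this abstract can be sketched concretely. The following is a minimal illustration, not code from the paper: given a "payload" p that expects its own source text plus an input, we build a program e(p) that reconstructs its own text quine-style (no infinite regress) and hands it to p. All names (`make_self_applying`, `payload`, `INPUT`) are illustrative.

```python
# A minimal sketch of the self-replication trick behind Kleene's Second
# Recursion Theorem: from any payload p(self_source, x) we build a
# program e(p) that constructs a copy of its own source and runs p on it.

def make_self_applying(payload_src):
    """Return source text for a program that rebuilds its own source
    (the quine trick) and runs the payload on (self_source, INPUT)."""
    template = (
        "payload_src = {ps!r}\n"
        "template = {t!r}\n"
        "self_source = template.format(ps=payload_src, t=template)\n"
        "exec(payload_src)\n"
        "result = payload(self_source, INPUT)\n"
    )
    return template.format(ps=payload_src, t=template)

# Payload p: report the length of the program's own source text.
payload_src = (
    "def payload(self_source, x):\n"
    "    return (len(self_source), x)\n"
)

program = make_self_applying(payload_src)
env = {"INPUT": 42}
exec(program, env)
print(env["result"])  # the program measured its own source length
```

Inside the generated program, `self_source` is rebuilt from the stored template and is character-for-character identical to the program itself, which is exactly the "complete low-level self-knowledge" the abstract describes.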
On identification by teams and probabilistic machines
Lecture Notes in Artificial Intelligence, 1995
Team Learning of Computable Languages
Cited by 1 (1 self)
Abstract:
A team of learning machines is a multiset of learning machines. A team is said to successfully learn a concept just in case each member of some nonempty subset, of predetermined size, of the team learns the concept. Team learning of languages may be viewed as a suitable theoretical model for studying computational limits on the use of multiple heuristics in learning from examples. Team learning of recursively enumerable languages has been studied extensively. However, it may be argued that, from a practical point of view, all languages of interest are computable. This paper gives theoretical results about team learnability of computable (recursive) languages. These results are mainly about two issues: redundancy and aggregation. The issue of redundancy deals with the impact of increasing the size of a team and increasing the number of machines required to be successful. The issue of aggregation deals with conditions under which a team may be replaced by a single machine without any loss in learning ability. The learning scenarios considered are: (a) identification in the limit of grammars for computable languages; (b) identification in the limit of decision procedures for computable languages; (c) identification in the limit of grammars for indexed families of computable languages; (d) identification in the limit of grammars for indexed families with a recursively enumerable class of grammars for the family as the hypothesis space. Scenarios that can be modeled by team learning are also presented.
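The team success criterion defined in this abstract can be expressed in a few lines. The sketch below is a toy illustration under simplifying assumptions (learners are plain functions mapping examples to a final hypothesis; the concept is a finite set); the names `team_learns`, `guess_union`, and `guess_evens` are invented for illustration.

```python
# A toy sketch of the team success criterion: a team (a multiset of
# learners) succeeds on a concept iff at least k of its members,
# counted with multiplicity, individually learn that concept.
from collections import Counter

def team_learns(team, k, concept, examples):
    """team: Counter mapping learner -> multiplicity (a multiset).
    Success iff some subset of predetermined size k learns `concept`."""
    successes = sum(count for learner, count in team.items()
                    if learner(examples) == concept)
    return successes >= k

# Two toy heuristics guessing a hidden set from positive examples.
guess_union = lambda ex: set(ex)                   # exactly what was seen
guess_evens = lambda ex: {x for x in ex if x % 2 == 0}

team = Counter({guess_union: 2, guess_evens: 1})   # guess_union appears twice
print(team_learns(team, k=2, concept={1, 2, 3}, examples=[1, 2, 3]))  # True
print(team_learns(team, k=3, concept={1, 2, 3}, examples=[1, 2, 3]))  # False
```

Using a `Counter` rather than a set captures the multiset aspect the abstract insists on: duplicate copies of the same learner each count toward the required subset size.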
Inductive Inference of Approximations for Recursive Concepts
2005
Abstract:
This paper provides a systematic study of inductive inference of indexable concept classes in learning scenarios where the learner is successful if its final hypothesis describes a finite variant of the target concept, i.e., learning with anomalies. Learning from positive data only and learning from both positive and negative data are distinguished. The following learning models are studied: learning in the limit, finite identification, set-driven learning, conservative inference, and behaviorally correct learning. Attention is focused on the case in which the number of allowed anomalies is finite but not a priori bounded. However, results for the special case of learning with an a priori bounded number of anomalies are presented, too. Characterizations of the learning models with anomalies in terms of finite telltale sets are provided. The observed varieties in the degree of recursiveness of the relevant telltale sets are already sufficient to quantify the differences in the corresponding learning models with anomalies. Finally, a complete picture concerning the relations of all models of learning with and without anomalies mentioned above is derived.
Learning Approximations of Recursive Concepts
2001
Abstract:
This paper provides a systematic study of inductive inference of indexable concept classes in learning scenarios where the learner is successful if its final hypothesis describes a finite variant of the target concept, i.e., learning with anomalies. Learning from positive data only and learning from both positive and negative data are distinguished. The following learning models are studied: learning in the limit, finite identification, set-driven learning, conservative inference, and behaviorally correct learning. Attention is focused on the case in which the number of allowed anomalies is finite but not a priori bounded. However, results for the special case of learning with an a priori bounded number of anomalies are presented, too. Characterizations of the learning models with anomalies in terms of finite telltale sets are provided. The observed varieties in the degree of recursiveness of the relevant telltale sets are already sufficient to quantify the differences in the corresponding ...
Learning Recursive Functions: A Survey
2008
Abstract:
Studying the learnability of classes of recursive functions has attracted considerable interest for at least four decades. Starting with Gold’s (1967) model of learning in the limit, many variations, modifications and extensions have been proposed. These models differ in some of the following: the mode of convergence, the requirements intermediate hypotheses have to fulfill, the set of allowed learning strategies, the source of information available to the learner during the learning process, the set of admissible hypothesis spaces, and the learning goals. A considerable amount of work in this field has been devoted to characterizing the function classes learnable in a given model, the influence of natural, intuitive postulates on the resulting learning power, the incorporation of randomness into the learning process, and the complexity of learning, among other topics. On the occasion of Rolf Wiehagen’s 60th birthday, the last four decades of research in that area are surveyed, with a special focus on Rolf Wiehagen’s work, which has made him one of the most influential scientists in the theory of learning recursive functions.
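Gold's learning-in-the-limit model mentioned above can be illustrated with the classic identification-by-enumeration strategy: fix an enumeration of the hypothesis class and always conjecture the first hypothesis consistent with the data seen so far. The sketch below uses a toy class (functions f_c(x) = x mod c), which is a stand-in chosen for this example and not taken from the survey.

```python
# A minimal sketch of Gold-style identification in the limit via
# identification by enumeration: conjecture the index of the first
# hypothesis consistent with all (x, f(x)) pairs seen so far. On any
# target in the class, the sequence of conjectures eventually stabilizes.

HYPOTHESES = [(c, (lambda x, c=c: x % c)) for c in range(1, 50)]

def learner(data_points):
    """data_points: list of (x, f(x)) pairs seen so far.
    Return the index of the first consistent hypothesis."""
    for idx, (c, h) in enumerate(HYPOTHESES):
        if all(h(x) == y for x, y in data_points):
            return idx
    return None  # target lies outside the hypothesis class

# Present the graph of the target f(x) = x mod 5 point by point.
target = lambda x: x % 5
data, conjectures = [], []
for x in range(12):
    data.append((x, target(x)))
    conjectures.append(learner(data))

# The learner changes its mind finitely often, then converges:
print(conjectures)  # [0, 1, 2, 3, 4, 4, 4, 4, 4, 4, 4, 4]
```

The finitely many "mind changes" followed by permanent convergence on a correct index is exactly the convergence criterion of Gold's model; the other models the survey lists vary this criterion and the learner's resources.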