Results 1 - 10 of 10
Computational Limitations on Learning from Examples
 Journal of the ACM
, 1988
Cited by 214 (10 self)
Abstract. The computational complexity of learning Boolean concepts from examples is investigated. It is shown for various classes of concept representations that these cannot be learned feasibly in a distribution-free sense unless RP = NP. These classes include (a) disjunctions of two monomials, (b) Boolean threshold functions, and (c) Boolean formulas in which each variable occurs at most once. Relationships between learning of heuristics and finding approximate solutions to NP-hard optimization problems are given. Categories and Subject Descriptors: F.1.1 [Computation by Abstract Devices]: Models of Computation - relations among models; F.1.2 [Computation by Abstract Devices]: Modes of Computation - probabilistic computation; F.1.3 [Computation by Abstract Devices]: Complexity Classes - reducibility and completeness; I.2.6 [Artificial Intelligence]: Learning - concept learning; induction
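By contrast with the hardness results above, a single monotone monomial is feasibly learnable from positive examples by the standard elimination algorithm. A minimal sketch, assuming Boolean examples are encoded as 0/1 tuples (the encoding and names are illustrative, not taken from the paper):

```python
def learn_monomial(n_vars, positive_examples):
    """Learn a monotone conjunction (monomial) over n_vars Boolean
    variables by elimination: start from the conjunction of all
    variables and delete every variable that is 0 in some positive
    example.  (Standard textbook algorithm; a sketch, not the
    paper's own construction.)
    """
    hypothesis = set(range(n_vars))       # x_0 AND x_1 AND ... AND x_{n_vars-1}
    for example in positive_examples:     # example: tuple of 0/1 values
        hypothesis = {i for i in hypothesis if example[i] == 1}
    return hypothesis                     # indices of variables still conjoined

# Target concept: x_0 AND x_2 over 4 variables.
positives = [(1, 0, 1, 0), (1, 1, 1, 0), (1, 0, 1, 1)]
print(learn_monomial(4, positives))       # → {0, 2}
```

The hypothesis only ever shrinks, so the learner makes at most n_vars deletions over the whole example sequence; the hardness in the paper begins with the disjunction of just two such monomials.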
Infinitary Self-Reference in Learning Theory
, 1994
Cited by 19 (6 self)
Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self-copy and then runs p on that self-copy together with any externally given input. e(p), in effect, has complete (low-level) self-knowledge, and p represents how e(p) uses its self-knowledge (and its knowledge of the external world). Infinite regress is not required since e(p) creates its self-copy outside of itself. One mechanism to achieve this creation is a self-replication trick isomorphic to that employed by single-celled organisms. Another is for e(p) to look in a mirror to see which program it is. In 1974 the author published an infinitary generalization of Kleene's theorem which he called the Operator Recursion Theorem. It provides a means for obtaining an (algorithmically) growing collection of programs which, in effect, share a common (also growing) mirror from which they can obtain complete low-level models of themselves and the other prog...
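The "self-replication trick" can be illustrated with a quine-style construction. The following sketch models programs as Python source strings and builds, from a given p, a program that reconstructs its own source and runs p on that self-copy; all names are illustrative, not Case's formalism:

```python
def make_self_knowing(p):
    """Toy rendering of the e(p) construction: keep the program's code
    as data and substitute it into itself (the quine trick), so that
    at run time the program can rebuild an exact copy of its own
    source and hand it to p along with the external input x.
    Programs here are just Python source strings.
    """
    template = "lambda x, _t={t!r}: p(_t.format(t=_t), x)"
    source = template.format(t=template)
    return eval(source, {"p": p})

# A p that simply reports the self-copy it was handed.
identity = lambda self_src, x: self_src
e_p = make_self_knowing(identity)
self_copy = e_p(None)
# The self-copy is exact: evaluating it yields a program that reports
# the very same source, with no infinite regress.
print(self_copy == eval(self_copy, {"p": identity})(None))  # → True
```

The copy is created "outside" the program in the sense of the abstract: the template holds the code as inert data, and substitution happens only when the program runs.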
On identification by teams and probabilistic machines
 Lecture Notes in Artificial Intelligence
, 1995
Inferring answers to queries
, 2007
Cited by 4 (3 self)
One focus of inductive inference is to infer a program for a function f from observations or queries about f. We propose a new line of research which examines the question of inferring the answers to queries. For a given class of computable functions, we consider the learning (in the limit) of properties of these functions that can be captured by queries formulated in a logical language L. We study the inference types that arise in this context. Of particular interest is a comparison between the learning of properties and the learning of programs. Our results suggest that these two types of learning are incomparable. In addition, our techniques can be used to prove a general lemma about query inference [19]. We show that I ⊂ J ⇒ QI(L) ⊂ QJ(L) for many standard inference types I, J and many query languages L. Hence any separation that holds between these inference types also holds between the corresponding query inference types. One interesting consequence is that [24, 49]QEX_0([Succ, <]^2) − [2, 4]QEX_0([Succ, <]^2) ≠ ∅.
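The flavour of inferring answers to queries, rather than programs, can be illustrated for a single existential query. A toy sketch, assuming the graph of f arrives as a stream of (x, f(x)) pairs (the [a, b]QEX notation above is from the paper; this code is only an illustration):

```python
def infer_existential_query(data_stream, predicate):
    """Query inference in the limit for an existential query about f:
    given the graph of f as a stream of (x, f(x)) pairs, conjecture
    False until a witness for the predicate appears, then True
    forever.  Either way the conjectures converge to the query's true
    answer, without ever producing a program for f.
    """
    answer = False
    for x, y in data_stream:
        answer = answer or predicate(x, y)
        yield answer

stream = [(0, 3), (1, 5), (2, 0), (3, 7)]            # part of the graph of some f
conjectures = list(infer_existential_query(stream, lambda x, y: y == 0))
print(conjectures)   # → [False, False, True, True]
```

Once the witness (2, 0) arrives, the learner's answer is locked in; the interest of the paper lies in richer query languages where such convergence is far less immediate.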
Learning Recursive Functions: A Survey
, 2008
Cited by 2 (0 self)
Studying the learnability of classes of recursive functions has attracted considerable interest for at least four decades. Starting with Gold’s (1967) model of learning in the limit, many variations, modifications and extensions have been proposed. These models differ in some of the following: the mode of convergence, the requirements intermediate hypotheses have to fulfill, the set of allowed learning strategies, the source of information available to the learner during the learning process, the set of admissible hypothesis spaces, and the learning goals. A considerable amount of work done in this field has been devoted to the characterization of function classes that can be learned in a given model, the influence of natural, intuitive postulates on the resulting learning power, the incorporation of randomness into the learning process, and the complexity of learning, among other topics. On the occasion of Rolf Wiehagen’s 60th birthday, the last four decades of research in this area are surveyed, with a special focus on Rolf Wiehagen’s work, which has made him one of the most influential scientists in the theory of learning recursive functions.
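Gold's learning in the limit, the starting point of the survey, is often introduced via identification by enumeration. A minimal sketch over a toy uniformly computable class (the names and the finite class are illustrative):

```python
def learn_by_enumeration(hypothesis_space, data_stream):
    """Gold-style identification in the limit by enumeration.
    hypothesis_space: a list of total functions, indexed by position
    (standing in for a uniformly computable class).  data_stream:
    (x, f(x)) pairs for the target f.  After each datum, output the
    least index consistent with everything seen so far; if the target
    is in the class, the guesses converge to a correct index.
    """
    seen = []
    for x, y in data_stream:
        seen.append((x, y))
        for i, h in enumerate(hypothesis_space):
            if all(h(a) == b for a, b in seen):
                yield i
                break

space = [lambda n: 0, lambda n: n, lambda n: n * n]   # three total functions
stream = [(0, 0), (1, 1), (2, 4)]                     # graph of n -> n*n
print(list(learn_by_enumeration(space, stream)))      # → [0, 1, 2]
```

The learner changes its mind twice before stabilising on index 2; the models surveyed differ precisely in what such mind changes, intermediate hypotheses, and convergence are allowed to look like.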
Team Learning of Computable Languages
Cited by 1 (1 self)
A team of learning machines is a multiset of learning machines. A team is said to successfully learn a concept just in case each member of some nonempty subset of predetermined size of the team learns the concept. Team learning of languages may be viewed as a suitable theoretical model for studying computational limits on the use of multiple heuristics in learning from examples. Team learning of recursively enumerable languages has been studied extensively. However, it may be argued that from a practical point of view all languages of interest are computable. This paper gives theoretical results about team learnability of computable (recursive) languages. These results are mainly about two issues: redundancy and aggregation. The issue of redundancy deals with the impact of increasing the size of a team and increasing the number of machines required to be successful. The issue of aggregation deals with conditions under which a team may be replaced by a single machine without any loss in learning ability. The learning scenarios considered are: (a) Identification in the limit of grammars for computable languages. (b) Identification in the limit of decision procedures for computable languages. (c) Identification in the limit of grammars for indexed families of computable languages. (d) Identification in the limit of grammars for indexed families with a recursively enumerable class of grammars for the family as the hypothesis space. Scenarios that can be modeled by team learning are also presented.
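The team success criterion can be sketched concretely. The toy below checks, over a finite text, whether at least k of m learners have stabilised on a correct conjecture; all names are illustrative, and the finite check only approximates convergence in the limit:

```python
def team_identifies(team, text, correct):
    """Count how many team members satisfy a toy version of the [k, m]
    success criterion: each learner maps a finite text prefix to a
    conjecture (a grammar index), `correct` decides whether a final
    conjecture is right, and "convergence" is approximated by the
    last two conjectures agreeing.
    """
    def converged_correctly(learner):
        guesses = [learner(text[:i + 1]) for i in range(len(text))]
        return guesses[-1] == guesses[-2] and correct(guesses[-1])
    return sum(converged_correctly(m) for m in team)

# Two of three toy learners stabilise on the correct conjecture 7.
stubborn = lambda prefix: 7
fickle = lambda prefix: len(prefix)        # never stabilises
slow = lambda prefix: 7 if len(prefix) > 1 else 0
successes = team_identifies([stubborn, fickle, slow], [1, 2, 3, 4], lambda g: g == 7)
print(successes)          # → 2, so the team succeeds under a [2, 3] criterion
```

The redundancy and aggregation questions in the abstract then ask how this count behaves as the team grows, and when a single machine can simulate the whole team.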
Inductive Inference of Approximations for Recursive Concepts
, 2005
This paper provides a systematic study of inductive inference of indexable concept classes in learning scenarios where the learner is successful if its final hypothesis describes a finite variant of the target concept, i.e., learning with anomalies. Learning from positive data only and from both positive and negative data is distinguished. The following learning models are studied: learning in the limit, finite identification, set-driven learning, conservative inference, and behaviorally correct learning. The attention is focused on the case that the number of allowed anomalies is finite but not a priori bounded. However, results for the special case of learning with an a priori bounded number of anomalies are presented, too. Characterizations of the learning models with anomalies in terms of finite telltale sets are provided. The observed varieties in the degree of recursiveness of the relevant telltale sets are already sufficient to quantify the differences in the corresponding learning models with anomalies. Finally, a complete picture concerning the relations of all models of learning with and without anomalies mentioned above is derived.
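Angluin-style telltale sets, which the characterizations above refine for learning with anomalies, are easy to state for a toy family of finite languages. A sketch, assuming languages are modelled as frozensets (illustrative only):

```python
def is_telltale(T, L, family):
    """Finite tell-tale check over a toy family of finite languages:
    T is a telltale for L within family iff T ⊆ L and no member L'
    of the family satisfies T ⊆ L' ⊊ L.  (An illustrative finite
    version of the classical characterization; the paper works with
    indexable classes and anomalies.)
    """
    return T <= L and not any(T <= Lp < L for Lp in family)

family = [frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]
print(is_telltale(frozenset({1}), frozenset({1}), family))        # → True
print(is_telltale(frozenset({1}), frozenset({1, 2, 3}), family))  # → False
```

In the second call, {1} ⊆ {1, 2} ⊊ {1, 2, 3} blocks the telltale condition; it is exactly the recursiveness of finding such sets that the paper uses to separate the anomaly models.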
Learning Approximations of Recursive Concepts
, 2001
This paper provides a systematic study of inductive inference of indexable concept classes in learning scenarios where the learner is successful if its final hypothesis describes a finite variant of the target concept, i.e., learning with anomalies. Learning from positive data only and from both positive and negative data is distinguished. The following learning models are studied: learning in the limit, finite identification, set-driven learning, conservative inference, and behaviorally correct learning. The attention is focused on the case that the number of allowed anomalies is finite but not a priori bounded. However, results for the special case of learning with an a priori bounded number of anomalies are presented, too. Characterizations of the learning models with anomalies in terms of finite telltale sets are provided. The observed varieties in the degree of recursiveness of the relevant telltale sets are already sufficient to quantify the differences in the corr...
Probabilistic Learning of Indexed Families under Monotonicity Constraints: Hierarchy Results and Complexity Aspects
We are concerned with probabilistic identification of indexed families of uniformly recursive languages from positive data under monotonicity constraints. Thereby, we consider conservative, strong-monotonic and monotonic probabilistic learning of indexed families with respect to class comprising, class preserving and proper hypothesis spaces, and investigate the probabilistic hierarchies in these learning models. In the setting of learning indexed families, probabilistic learning under monotonicity constraints is more powerful than deterministic learning under monotonicity constraints, even if the probability is close to 1, provided the learning machines are restricted to proper or class preserving hypothesis spaces. In the class comprising case, each of the investigated probabilistic hierarchies has a threshold. In particular, we can show for class comprising conservative learning as well as for learning without additional constraints that probabilistic identification and team identification are equivalent. This yields discrete probabilistic hierarchies in these cases. In the second part of our work, we investigate the relation between probabilistic learn...
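The equivalence of probabilistic and team identification mentioned for the class comprising case rests on a simple direction: a learner that picks one of n deterministic machines uniformly at random succeeds with probability k/n whenever k of them identify the target. A toy Monte Carlo check of that direction (all names illustrative):

```python
import random

def success_probability(learner_succeeds, trials=10_000, seed=0):
    """Simulate a probabilistic learner built from a team: on each
    trial, choose one of the n deterministic machines uniformly at
    random; the entry of learner_succeeds says whether that machine
    identifies the target.  The estimated success probability should
    approach k/n, where k machines succeed.
    """
    rng = random.Random(seed)
    hits = sum(rng.choice(learner_succeeds) for _ in range(trials))
    return hits / trials

# Two of three machines identify the target language.
print(success_probability([True, True, False]))   # close to 2/3
```

The nontrivial half of the equivalence (turning an arbitrary probabilistic learner back into a finite team) is where the paper's threshold results come from.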