Results 1–10 of 15
On the Intrinsic Complexity of Learning
Information and Computation, 1995
Abstract

Cited by 25 (6 self)
A new view of learning is presented. The basis of this view is a natural notion of reduction. We prove completeness and relative difficulty results. An infinite hierarchy of intrinsically more and more difficult to learn concepts is presented. Our results indicate that the complexity notion captured by our new notion of reduction differs dramatically from the traditional studies of the complexity of the algorithms performing learning tasks. Traditional studies of inductive inference have focused on illuminating various strata of learnability based on varying the definition of learnability. The research following Valiant's PAC model [Val84] and Angluin's teacher/learner model [Ang88] paid very careful attention to calculating the complexity of the learning algorithm. We present a new view of learning, based on the notion of reduction, that captures a different perspective on learning complexity than all prior studies. Based on our preliminary reports, Jain...
Ordinal Mind Change Complexity of Language Identification
Abstract

Cited by 18 (6 self)
The approach of ordinal mind change complexity, introduced by Freivalds and Smith, uses (notations for) constructive ordinals to bound the number of mind changes made by a learning machine. This approach provides a measure of the extent to which a learning machine has to keep revising its estimate of the number of mind changes it will make before converging to a correct hypothesis for languages in the class being learned. Recently, this notion, which also yields a measure for the difficulty of learning a class of languages, has been used to analyze the learnability of rich concept classes. The present paper further investigates the utility of ordinal mind change complexity. It is shown that for identification from both positive and negative data and n ≥ 1, the ordinal mind change complexity of the class of languages formed by unions of up to n + 1 pattern languages is only ω ×_o notn(n) (where notn(n) is a notation for n, ω is a notation for the least limit ordinal, and ×_o represents ordinal multiplication). This result nicely extends an observation of Lange and Zeugmann ...
The intrinsic complexity of language identification
Journal of Computer and System Sciences, 1996
Abstract

Cited by 17 (7 self)
A new investigation of the complexity of language identification is undertaken using the notion of reduction from recursion theory and complexity theory. The approach, referred to as the intrinsic complexity of language identification, employs notions of ‘weak’ and ‘strong’ reduction between learnable classes of languages. The intrinsic complexity of several classes is considered and the results agree with the intuitive difficulty of learning these classes. Several complete classes are shown for both the reductions, and it is also established that the weak and strong reductions are distinct. An interesting result is that the self-referential class of Wiehagen, in which the minimal element of every language is a grammar for the language, and the class of pattern languages introduced by Angluin are equivalent in the strong sense. This study has been influenced by a similar treatment of function identification by Freivalds, Kinber, and Smith.
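Angluin's pattern languages, which several of the abstracts above refer to, admit a compact illustration. A pattern is a string of constants and variables; its language is the set of all strings obtained by substituting a non-empty string for each variable, consistently across occurrences. The following sketch (the token representation and function name are my own, not taken from any of the papers listed) decides membership by backtracking over substitutions:

```python
def in_pattern_language(pattern, s, bindings=None, i=0):
    """Check whether string s belongs to the language of `pattern`.

    `pattern` is a list of tokens: ('const', c) for a fixed character,
    ('var', name) for a variable that must be replaced by a non-empty
    string, consistently at every occurrence (Angluin's definition).
    """
    if bindings is None:
        bindings = {}
    if not pattern:
        return i == len(s)          # pattern exhausted: must match all of s
    kind, val = pattern[0]
    if kind == 'const':
        if i < len(s) and s[i] == val:
            return in_pattern_language(pattern[1:], s, bindings, i + 1)
        return False
    # variable token: reuse an existing binding if there is one
    if val in bindings:
        w = bindings[val]
        if s.startswith(w, i):
            return in_pattern_language(pattern[1:], s, bindings, i + len(w))
        return False
    # fresh variable: try every non-empty substitution
    for j in range(i + 1, len(s) + 1):
        new_bindings = dict(bindings, **{val: s[i:j]})
        if in_pattern_language(pattern[1:], s, new_bindings, j):
            return True
    return False
```

For instance, the pattern x a x (one variable x around the constant a) accepts "bab" (with x = "b") but rejects "bb".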
Elementary formal systems, intrinsic complexity, and procrastination
Information and Computation, 1997
Abstract

Cited by 13 (6 self)
Recently, rich subclasses of elementary formal systems (EFS) have been shown to be identifiable in the limit from only positive data. Examples of these classes are Angluin’s pattern languages, unions of pattern languages by Wright and Shinohara, and classes of languages definable by length-bounded elementary formal systems studied by Shinohara. The present paper employs two distinct bodies of abstract studies in the inductive inference literature to analyze the learnability of these concrete classes. The first approach, introduced by Freivalds and Smith, uses constructive ordinals to bound the number of mind changes. ω denotes the first limit ordinal. An ordinal mind change bound of ω means that identification can be carried out by a learner that, after examining some element(s) of the language, announces an upper bound on the number of mind changes it will make before converging; a bound of ω · 2 means that the learner reserves the right to revise this upper bound once; a bound of ω · 3 means the learner reserves the right to revise this upper bound twice, and so on. A bound of ω^2 means that identification can be carried out by a learner that announces an upper bound on the number of times it may revise its conjectured upper bound on the number of mind changes. It is shown in the present paper that the ordinal mind change complexity for identification of languages formed by unions of up to n pattern languages is ω^n. It is ...
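The bound ω described above can be made concrete on a toy class. Take the languages L_k = {k, k + 1, k + 2, ...} over the natural numbers: after seeing its first datum m, a learner knows the true k is at most m, so it can announce m as an upper bound on its remaining mind changes, since its conjecture (the minimum datum seen so far) can only decrease. The sketch below is my own illustration of this behaviour, not code from the paper:

```python
def coinit_learner(text):
    """Identify L_k = {k, k+1, ...} from a stream of positive data.

    Conjecture the minimum datum seen so far.  After the first datum m,
    at most m further mind changes are possible, so the learner can
    announce that bound -- the situation an ordinal bound of omega models.
    """
    conjecture = None    # current guess for k
    bound = None         # announced bound on remaining mind changes
    mind_changes = 0
    for x in text:
        if bound is None:
            bound = x            # true k <= x: at most x decreases remain
        if conjecture is None:
            conjecture = x
        elif x < conjecture:
            conjecture = x       # revise: a smaller element was observed
            mind_changes += 1
    return conjecture, mind_changes, bound
```

On the text 5, 7, 3, 3, 2 the learner announces the bound 5 after the first datum and then makes two mind changes (5 → 3 → 2), well within the announced bound.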
Language learning from texts: Degrees of intrinsic complexity and their characterizations
In: Proceedings of the 13th Annual Conference on Computational Learning Theory, 2000
Abstract

Cited by 7 (2 self)
This paper deals with two problems: 1) what makes languages learnable in the limit by natural strategies of varying hardness; 2) what makes classes of languages the hardest ones to learn. To quantify hardness of learning, we use intrinsic complexity based on reductions between learning problems. Two types of reductions are considered: weak reductions mapping texts (representations of languages) to texts, and strong reductions mapping languages to languages. For both types of reductions, characterizations of complete (hardest) classes in terms of their algorithmic and topological potentials have been obtained. To characterize the strong complete degree, we discovered a new and natural complete class capable of “coding” any learning problem using the density of the set of rational numbers. We have also discovered and characterized rich hierarchies of degrees of complexity based on “core” natural learning problems. The classes in these hierarchies contain “multidimensional” languages, where the information learned from one dimension aids in learning other dimensions. In one formalization of this idea, the grammars learned from dimensions 1, 2, ..., k specify the “subspace” for dimension k + 1, while the learning strategy for every dimension is predefined. In our other formalization, a “pattern” learned from dimension k specifies the learning strategy for dimension k + 1. A number of open problems is discussed.
On the intrinsic complexity of learning recursive functions
In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, 1999
Abstract

Cited by 5 (1 self)
The intrinsic complexity of learning compares the difficulty of learning classes of objects by using some reducibility notion. For several types of learning recursive functions, natural complete classes are exhibited and necessary and sufficient conditions for completeness are derived. Informally, a class is complete iff its topological structure is highly complex while its algorithmic structure is easy. Some self-describing classes turn out to be complete. Furthermore, the structure of the intrinsic complexity is shown to be much richer than the structure of the mind change complexity, though in general, intrinsic complexity and mind change complexity can behave “orthogonally”.
Transformations That Preserve Learnability
Algorithmic Learning Theory: Seventh International Workshop (ALT ’96), volume 1160 of Lecture Notes in Artificial Intelligence, 1996
Abstract

Cited by 5 (0 self)
We consider transformations (performed by general recursive operators) mapping recursive functions into recursive functions. These transformations can be considered as mapping sets of recursive functions into sets of recursive functions. A transformation is said to preserve the identification type I if the transformation always maps I-identifiable sets into I-identifiable sets. There are transformations preserving FIN but not EX, and there are transformations preserving EX but not FIN. However, transformations preserving EX_i always preserve EX_j for j < i. In his academic lecture (1872) before receiving a professorship at Erlangen University, Felix Klein (1849–1925) designed an astonishing program for remaking geometry. The listeners were confused and even shocked. In this program (nowadays known as the Erlangen program) geometry was considered as “what remains invariant under motion transformations”. It seemed unbelievable that a geometry textbook could have no ...
An approach to intrinsic complexity of uniform learning
Abstract
Inductive inference is concerned with algorithmic learning of recursive functions. In the model of learning in the limit, a learner successful for a class of recursive functions must eventually find a program for any function in the class from a gradually growing sequence of its values. This approach is generalized in uniform learning, where the problem of synthesizing a successful learner for a class of functions from a description of this class is considered. A common reduction-based approach for comparing the complexity of learning problems in inductive inference is intrinsic complexity. Informally, if a learning problem (a class of recursive functions) A is reducible to a learning problem (a class of recursive functions) B, then a solution for B can be transformed into a solution for A. In the context of intrinsic complexity, reducibility between two classes is expressed via recursive operators transforming target functions in one direction and sequences of corresponding hypotheses in the other direction. The present paper is concerned with the intrinsic complexity of uniform learning. The relevant notions are adapted and illustrated by several examples. Characterisations of complete classes finally allow for various insightful conclusions. The connection to the intrinsic complexity of non-uniform learning is revealed within several analogies concerning first the structure of complete classes and second the general interpretation of the notion of intrinsic complexity. Key words: inductive inference, learning theory, recursion theory
Intrinsic Complexity of Uniform Learning
Abstract
Inductive inference is concerned with algorithmic learning of recursive functions. In the model of learning in the limit, a learner successful for a class of recursive functions must eventually find a program for any function in the class from a gradually growing sequence of its values. This approach is generalized in uniform learning, where the problem of synthesizing a successful learner for a class of functions from a description of this class is considered. A common reduction-based approach for comparing the complexity of learning problems in inductive inference is intrinsic complexity. In this context, reducibility between two classes is expressed via recursive operators transforming target functions in one direction and sequences of corresponding hypotheses in the other direction. The present paper is the first one concerned with the intrinsic complexity of uniform learning. The relevant notions are adapted and illustrated by several examples. Characterizations of complete classes finally allow for various insightful conclusions. The connection to the intrinsic complexity of non-uniform learning is revealed within several analogies concerning firstly the role and structure of complete classes and secondly the general interpretation of the notion of intrinsic complexity.