Results 1–10 of 167
Inductive Inference, DFAs and Computational Complexity
 2nd Int. Workshop on Analogical and Inductive Inference (AII)
, 1989
"... This paper surveys recent results concerning the inference of deterministic finite automata (DFAs). The results discussed determine the extent to which DFAs can be feasibly inferred, and highlight a number of interesting approaches in computational learning theory. 1 ..."
Abstract

Cited by 93 (1 self)
This paper surveys recent results concerning the inference of deterministic finite automata (DFAs). The results discussed determine the extent to which DFAs can be feasibly inferred, and highlight a number of interesting approaches in computational learning theory. 1
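As a rough illustration of the inference problem this survey addresses (our own sketch, with hypothetical names, not taken from the paper): inferring a DFA means finding a small automaton consistent with a sample of positively and negatively labeled strings. The helper below only checks consistency of a candidate DFA against such a sample, which is the basic feasibility question the hardness results turn on.

    # Hypothetical sketch: check whether a candidate DFA is consistent with a
    # labeled sample -- the core subproblem behind DFA inference from examples.
    def dfa_accepts(delta, start, accepting, word):
        """delta: dict mapping (state, symbol) -> state."""
        state = start
        for symbol in word:
            state = delta[(state, symbol)]
        return state in accepting

    def consistent(delta, start, accepting, positive, negative):
        """True iff the DFA accepts every positive example and rejects every negative one."""
        return (all(dfa_accepts(delta, start, accepting, w) for w in positive)
                and not any(dfa_accepts(delta, start, accepting, w) for w in negative))

    # Example: a two-state DFA over {0, 1} accepting strings with an even number of 1s.
    delta = {(0, "0"): 0, (0, "1"): 1, (1, "0"): 1, (1, "1"): 0}
    print(consistent(delta, 0, {0}, positive=["", "11", "0110"], negative=["1", "10"]))  # True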
A Guided Tour Across the Boundaries of Learning Recursive Languages
 Lecture Notes in Artificial Intelligence
, 1994
"... The present paper deals with the learnability of indexed families of uniformly recursive languages from positive data as well as from both, positive and negative data. We consider the influence of various monotonicity constraints to the learning process, and provide a thorough study concerning the i ..."
Abstract

Cited by 63 (30 self)
The present paper deals with the learnability of indexed families of uniformly recursive languages from positive data as well as from both positive and negative data. We consider the influence of various monotonicity constraints on the learning process, and provide a thorough study concerning the influence of several parameters. In particular, we present examples pointing to typical problems and solutions in the field. Then we provide a unifying framework for learning. Furthermore, we survey results concerning how learnability depends on the hypothesis space, and concerning order independence. Moreover, new results dealing with the efficiency of learning are provided. First, we investigate the power of iterative learning algorithms. The second measure of efficiency studied is the number of mind changes a learning algorithm is allowed to perform. In this setting we consider the problem of whether or not the monotonicity constraints introduced influence the efficiency of learning algo...
The Power of Vacillation in Language Learning
, 1992
"... Some extensions are considered of Gold's influential model of language learning by machine from positive data. Studied are criteria of successful learning featuring convergence in the limit to vacillation between several alternative correct grammars. The main theorem of this paper is that there ..."
Abstract

Cited by 46 (13 self)
Some extensions are considered of Gold's influential model of language learning by machine from positive data. Studied are criteria of successful learning featuring convergence in the limit to vacillation between several alternative correct grammars. The main theorem of this paper is that there are classes of languages that can be learned if convergence in the limit to up to (n+1) exactly correct grammars is allowed but which cannot be learned if convergence in the limit is to no more than n grammars, where the no more than n grammars can each make finitely many mistakes. This contrasts sharply with results of Barzdin and Podnieks and, later, Case and Smith, for learnability from both positive and negative data. A subset principle from a 1980 paper of Angluin is extended to the vacillatory and other criteria of this paper. This principle provides a necessary condition for circumventing overgeneralization in learning from positive data. It is applied to prove another theorem to the eff...
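For orientation, the subset principle referred to is usually stated via Angluin's "tell-tale" condition; a standard formulation from the literature (our own notation, not quoted from this paper) reads: a class of languages is learnable from positive data only if every language in it has a finite subset that no other language of the class can be squeezed strictly between, i.e.

\[
\forall L \in \mathcal{L} \;\; \exists \text{ finite } T \subseteq L \;\; \neg\exists L' \in \mathcal{L} : \; T \subseteq L' \subsetneq L .
\]

The paper extends this necessary condition to the vacillatory criteria described above.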
Learning via Queries in ...
, 1992
"... We prove that the set of all recursive functions cannot be inferred using firstorder queries in the query language containing extra symbols [+; !]. The proof of this theorem involves a new decidability result about Presburger arithmetic which is of independent interest. Using our machinery, we ..."
Abstract

Cited by 43 (12 self)
We prove that the set of all recursive functions cannot be inferred using first-order queries in the query language containing extra symbols [+, <]. The proof of this theorem involves a new decidability result about Presburger arithmetic which is of independent interest. Using our machinery, we show that the set of all primitive recursive functions cannot be inferred with a bounded number of mind changes, again using queries in [+, <]. Additionally, we resolve an open question in [7] about passive versus active learning. 1) Introduction This paper presents new results in the area of query inductive inference (introduced in [7]); in addition, there are results of interest in mathematical logic. Inductive inference is the study of inductive machine learning in a theoretical framework. In query inductive inference, we study the ability of a Query Inference Machine ...
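To fix intuition, a query in this setting is a first-order sentence about the unknown function, answered truthfully by a teacher. A purely hypothetical example (not from the paper), assuming a unary function symbol F for the target function and variables ranging over the natural numbers, might look like

\[
\exists x \; \bigl( F(x) + F(x) < F(x + x) \bigr),
\]

using only the extra symbols + and < of the query language.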
Incremental concept learning for bounded data mining
 Information and Computation
, 1999
"... Important re nements of concept learning in the limit from positive data considerably restricting the accessibility of input data are studied. Let c be any concept; every in nite sequence of elements exhausting c is called positive presentation of c. In all learning models considered the learning ma ..."
Abstract

Cited by 42 (32 self)
Important refinements of concept learning in the limit from positive data considerably restricting the accessibility of input data are studied. Let c be any concept; every infinite sequence of elements exhausting c is called a positive presentation of c. In all learning models considered the learning machine computes a sequence of hypotheses about the target concept from a positive presentation of it. With iterative learning, the learning machine, in making a conjecture, has access to its previous conjecture and the latest data item coming in. In k-bounded example-memory inference (k is a priori fixed) the learner is allowed to access, in making a conjecture, its previous hypothesis, its memory of up to k data items it has already seen, and the next element coming in. In the case of k-feedback identification, the learning machine, in making a conjecture, has access to its previous conjecture, the latest data item coming in, and, on the basis of this information, it can compute k items and query the database of previous data to find out, for each of the k items, whether or not it is in the database (k is again a priori fixed). In all cases, the sequence of conjectures has to converge to a hypothesis ...
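A minimal sketch of the interface these models impose (hypothetical names, our own illustration rather than the paper's formalism): an iterative learner sees only its previous conjecture and the newest data item, and a k-bounded example-memory learner may additionally keep at most k previously seen items.

    # Hypothetical sketch of the k-bounded example-memory protocol described above.
    # The learner's state is (previous hypothesis, memory of at most k stored items);
    # an iterative learner is the special case k = 0.
    def run_learner(update, presentation, k, initial_hypothesis=None):
        hypothesis, memory = initial_hypothesis, []
        for item in presentation:                  # positive presentation, item by item
            hypothesis, memory = update(hypothesis, memory, item)
            assert len(memory) <= k                # memory bound fixed a priori
        return hypothesis

    # Toy update rule: conjecture the maximum element seen so far, storing nothing (k = 0).
    def update_max(hypothesis, memory, item):
        new_max = item if hypothesis is None else max(hypothesis, item)
        return new_max, []

    print(run_learner(update_max, [3, 1, 4, 1, 5], k=0))   # 5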
On the Structure of Degrees of Inferability
 Journal of Computer and System Sciences
, 1993
"... Degrees of inferability have been introduced to measure the learning power of inductive inference machines which have access to an oracle. The classical concept of degrees of unsolvability measures the computing power of oracles. In this paper we determine the relationship between both notions. ..."
Abstract

Cited by 31 (18 self)
Degrees of inferability have been introduced to measure the learning power of inductive inference machines which have access to an oracle. The classical concept of degrees of unsolvability measures the computing power of oracles. In this paper we determine the relationship between both notions. 1 Introduction We consider learning of classes of recursive functions within the framework of inductive inference [21]. A recent theme is the study of inductive inference machines with oracles ([8, 10, 11, 17, 24] and tangentially [12]; cf. [10] for a comprehensive introduction and a collection of all previous results). The basic question is how the information content of the oracle (technically: its Turing degree) relates to its learning power (technically: its inference degree, depending on the underlying inference criterion). In this paper a definitive answer is obtained for the case of recursively enumerable oracles and the case when only finitely many queries to the oracle are allo...
On the Intrinsic Complexity of Learning
 Information and Computation
, 1995
"... A new view of learning is presented. The basis of this view is a natural notion of reduction. We prove completeness and relative difficulty results. An infinite hierarchy of intrinsically more and more difficult to learn concepts is presented. Our results indicate that the complexity notion capt ..."
Abstract

Cited by 31 (8 self)
A new view of learning is presented. The basis of this view is a natural notion of reduction. We prove completeness and relative difficulty results. An infinite hierarchy of intrinsically more and more difficult-to-learn concepts is presented. Our results indicate that the complexity notion captured by our new notion of reduction differs dramatically from the traditional studies of the complexity of the algorithms performing learning tasks. 1 Introduction Traditional studies of inductive inference have focused on illuminating various strata of learnability based on varying the definition of learnability. The research following Valiant's PAC model [Val84] and Angluin's teacher/learner model [Ang88] paid very careful attention to calculating the complexity of the learning algorithm. We present a new view of learning, based on the notion of reduction, that captures a different perspective on learning complexity than all prior studies. Based on our preliminary reports, Jain...
Infinitary Self Reference in Learning Theory
, 1994
"... Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self copy and then runs p on that self copy together with any externally given input. e(p), in effect, has complete (low level) self knowledge, and p represents ..."
Abstract

Cited by 19 (6 self)
Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self copy and then runs p on that self copy together with any externally given input. e(p), in effect, has complete (low level) self-knowledge, and p represents how e(p) uses its self-knowledge (and its knowledge of the external world). Infinite regress is not required since e(p) creates its self copy outside of itself. One mechanism to achieve this creation is a self-replication trick isomorphic to that employed by single-celled organisms. Another is for e(p) to look in a mirror to see which program it is. In 1974 the author published an infinitary generalization of Kleene's theorem which he called the Operator Recursion Theorem. It provides a means for obtaining an (algorithmically) growing collection of programs which, in effect, share a common (also growing) mirror from which they can obtain complete low level models of themselves and the other prog...
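The "self copy created outside of itself" idea has a familiar programming analogue. The following toy Python illustration (our own, not the paper's construction) builds a program text that reconstructs its own source by applying a template to a quoted copy of that template, so no infinite regress is needed; Kleene's theorem, and the Operator Recursion Theorem mentioned above, guarantee such fixed points in full generality.

    # Toy quine-style construction: self_text is a program whose variable SELF
    # evaluates to the program's own source (a self copy created outside itself).
    template = "TEMPLATE = {t!r}\nSELF = TEMPLATE.format(t=TEMPLATE)\n"
    self_text = template.format(t=template)

    namespace = {}
    exec(self_text, namespace)
    print(namespace["SELF"] == self_text)   # True: the program rebuilt its own text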
Ordinal Mind Change Complexity of Language Identification
"... The approach of ordinal mind change complexity, introduced by Freivalds and Smith, uses (notations for) constructive ordinals to bound the number of mind changes made by a learning machine. This approach provides a measure of the extent to which a learning machine has to keep revising its estimate o ..."
Abstract

Cited by 19 (5 self)
The approach of ordinal mind change complexity, introduced by Freivalds and Smith, uses (notations for) constructive ordinals to bound the number of mind changes made by a learning machine. This approach provides a measure of the extent to which a learning machine has to keep revising its estimate of the number of mind changes it will make before converging to a correct hypothesis for languages in the class being learned. Recently, this notion, which also yields a measure for the difficulty of learning a class of languages, has been used to analyze the learnability of rich concept classes. The present paper further investigates the utility of ordinal mind change complexity. It is shown that for identification from both positive and negative data and n ≥ 1, the ordinal mind change complexity of the class of languages formed by unions of up to n + 1 pattern languages is only ω ×_O notn(n) (where notn(n) is a notation for n, ω is a notation for the least limit ordinal and ×_O represents ordinal multiplication). This result nicely extends an observation of Lange and Zeugmann ...
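As a rough illustration of how an ordinal bounds mind changes (a hypothetical Python sketch, not the paper's formalism): a counter set to ω may, on the first mind change, be replaced by any natural number, after which each further mind change must strictly decrease it. Larger ordinals such as ω ×_O notn(n) allow this "pick a fresh finite budget" step to be repeated.

    # Hypothetical sketch: a mind-change counter bounded by the ordinal omega.
    # "omega" may be replaced by any natural number on the first mind change;
    # afterwards every mind change must strictly decrease the counter.
    class OmegaCounter:
        def __init__(self):
            self.value = "omega"
        def mind_change(self, new_value=None):
            if self.value == "omega":
                assert isinstance(new_value, int) and new_value >= 0
                self.value = new_value      # learner commits to a finite budget...
            else:
                assert self.value > 0, "mind-change budget exhausted"
                self.value -= 1             # ...which then only counts down
            return self.value

    c = OmegaCounter()
    print(c.mind_change(3), c.mind_change(), c.mind_change(), c.mind_change())   # 3 2 1 0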
Language Learning With Some Negative Information
, 1993
"... Gold–style language learning is a formal theory of learning from examples by algorithmic devices called learning machines. Originally motivated by child language learning, it features the algorithmic synthesis (in the limit) of grammars for formal languages from information about those languages. In ..."
Abstract

Cited by 18 (10 self)
Gold-style language learning is a formal theory of learning from examples by algorithmic devices called learning machines. Originally motivated by child language learning, it features the algorithmic synthesis (in the limit) of grammars for formal languages from information about those languages. In traditional Gold-style language learning, learning machines are not provided with negative information, i.e., information about the complements of the input languages. We investigate two approaches to providing small amounts of negative information and demonstrate in each case a strong resulting increase in learning power. Finally, we show that small packets of negative information also lead to increased speed of learning. This result agrees with a psycholinguistic hypothesis of McNeill correlating the availability of parental expansions with the speed of child language development.