Results 1–10 of 38
The Power of Vacillation in Language Learning, 1992
Abstract

Cited by 46 (13 self)
Some extensions are considered of Gold's influential model of language learning by machine from positive data. Studied are criteria of successful learning featuring convergence in the limit to vacillation between several alternative correct grammars. The main theorem of this paper is that there are classes of languages that can be learned if convergence in the limit to up to (n+1) exactly correct grammars is allowed but which cannot be learned if convergence in the limit is to no more than n grammars, where the no more than n grammars can each make finitely many mistakes. This contrasts sharply with results of Barzdin and Podnieks and, later, Case and Smith, for learnability from both positive and negative data. A subset principle from a 1980 paper of Angluin is extended to the vacillatory and other criteria of this paper. This principle provides a necessary condition for circumventing overgeneralization in learning from positive data. It is applied to prove another theorem to the eff...
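As a minimal sketch of the Gold-style setting this abstract builds on (our illustration, not the paper's construction): a learner reads positive data one datum at a time and conjectures the first language in a fixed enumeration that contains everything seen so far. On any text for a language in the class, its conjectures converge in the limit to a single correct index; the paper's vacillatory criteria relax exactly this final-convergence requirement, allowing the learner to alternate forever among up to n correct grammars.

```python
# Toy Gold-style learner from positive data (illustrative only).
# `hypothesis_class` is an assumed finite enumeration of languages,
# each represented as a Python set; the learner conjectures the index
# of the first language consistent with the data seen so far.

def learner(hypothesis_class, text):
    """Yield the learner's conjecture after each datum of `text`."""
    seen = set()
    for datum in text:
        seen.add(datum)
        # First hypothesis containing all positive data so far.
        for i, language in enumerate(hypothesis_class):
            if seen <= language:
                yield i
                break

# Smaller languages enumerated first, in the spirit of Angluin's
# subset principle mentioned in the abstract.
classes = [{0}, {0, 1}, {0, 1, 2}]
conjectures = list(learner(classes, [0, 1, 1, 2, 0]))  # → [0, 1, 1, 2, 2]
```

After the last new datum the conjecture stabilizes at index 2, the correct grammar; under a vacillatory criterion, stabilizing on any finite set of correct indices would also count as success.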
The intrinsic complexity of language identification
Journal of Computer and System Sciences, 1996
Abstract

Cited by 19 (8 self)
A new investigation of the complexity of language identification is undertaken using the notion of reduction from recursion theory and complexity theory. The approach, referred to as the intrinsic complexity of language identification, employs notions of ‘weak’ and ‘strong’ reduction between learnable classes of languages. The intrinsic complexity of several classes is considered and the results agree with the intuitive difficulty of learning these classes. Several complete classes are shown for both the reductions and it is also established that the weak and strong reductions are distinct. An interesting result is that the self-referential class of Wiehagen, in which the minimal element of every language is a grammar for the language, and the class of pattern languages introduced by Angluin are equivalent in the strong sense. This study has been influenced by a similar treatment of function identification by Freivalds, Kinber, and Smith.
Language Learning With Some Negative Information, 1993
Abstract

Cited by 18 (10 self)
Gold-style language learning is a formal theory of learning from examples by algorithmic devices called learning machines. Originally motivated by child language learning, it features the algorithmic synthesis (in the limit) of grammars for formal languages from information about those languages. In traditional Gold-style language learning, learning machines are not provided with negative information, i.e., information about the complements of the input languages. We investigate two approaches to providing small amounts of negative information and demonstrate in each case a strong resulting increase in learning power. Finally, we show that small packets of negative information also lead to increased speed of learning. This result agrees with a psycholinguistic hypothesis of McNeill correlating the availability of parental expansions with the speed of child language development.
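A toy example (ours, not the paper's model) of why even a small packet of negative information helps: from positive data alone, a learner whose enumeration lists a superset before a subset overgeneralizes on the subset language, but a single negative datum rules the superset out.

```python
# Illustrative learner: conjecture the first language consistent with
# the positive data AND excluding all known negative data. The names
# and the two-language class are our own hypothetical setup.

def learner(hypothesis_class, positives, negatives=frozenset()):
    pos = set(positives)
    neg = set(negatives)
    for i, language in enumerate(hypothesis_class):
        if pos <= language and not (neg & language):
            return i
    return None

classes = [{0, 1}, {0}]              # superset enumerated first
assert learner(classes, [0]) == 0     # overgeneralizes to {0, 1}
assert learner(classes, [0], {1}) == 1  # one negative datum fixes it
```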
Computational Limits on Team Identification of Languages, 1993
Abstract

Cited by 17 (7 self)
A team of learning machines is essentially a multiset of learning machines.
Synthesizing Enumeration Techniques For Language Learning
In Proceedings of the Ninth Annual Conference on Computational Learning Theory, 1996
Abstract

Cited by 16 (7 self)
In this paper we assume, without loss of generality, that for all σ ⊆ τ, [M(σ) ≠ ?] ⇒ [M(τ) ≠ ?].
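The condition says that once a machine M has output a conjecture on some initial segment, it never reverts to the "no conjecture" symbol (?) on a longer segment. A sketch of why this costs no generality, with ? modeled as Python's None (the wrapper and the example learner are our own illustration):

```python
# Wrap a learner M so that on any segment it outputs M's most recent
# non-None conjecture over all prefixes. The wrapped learner M' then
# satisfies: for σ ⊆ τ, M'(σ) is not None implies M'(τ) is not None,
# and M' converges to whatever M converges to.

def normalize(M):
    def M_prime(segment):
        last = None
        for k in range(len(segment) + 1):
            guess = M(segment[:k])
            if guess is not None:
                last = guess   # remember the latest real conjecture
        return last
    return M_prime

def M(segment):
    # Hypothetical learner violating the condition: it conjectures
    # only on nonempty even-length segments.
    return len(segment) if segment and len(segment) % 2 == 0 else None

M_prime = normalize(M)
# M((1, 2, 3)) is None, but M_prime((1, 2, 3)) == 2: once defined,
# M_prime stays defined on extensions.
```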
Hard Words
Language Learning and Development, 1(1), 23–64, 2005
Abstract

Cited by 13 (0 self)
How do children acquire the meaning of words? And why are words such as know harder for learners to acquire than words such as dog or jump? We suggest that the chief limiting factor in acquiring the vocabulary of natural languages consists not in overcoming conceptual difficulties with abstract word meanings but rather in mapping these meanings onto their corresponding lexical forms. This opening premise of our position, while controversial, is shared with some prior approaches. The present discussion moves forward from there to a detailed proposal for how the mapping problem for the lexicon is solved, as well as a presentation of experimental findings that support this account. We describe an overlapping series of steps through which novices move in representing the lexical forms and phrase structures of the exposure language, a probabilistic multiple-cue learning process known as syntactic bootstrapping. The machinery is set in motion by word-to-world pairing, a procedure available to novices from the ...
Complexity issues for vacillatory function identification
Information and Computation, 1995
Abstract

Cited by 12 (10 self)
It was previously shown by Barzdin and Podnieks that one does not increase the power of learning programs for functions by allowing learning algorithms to converge to a finite set of correct programs instead of requiring them to converge to a single correct program. In this paper we define some new, subtle, but natural concepts of mind change complexity for function learning and show that, if one bounds this complexity for learning algorithms, then, by contrast with Barzdin and Podnieks' result, there are interesting and sometimes complicated tradeoffs between these complexity bounds, bounds on the number of final correct programs, and learning power. CR Classification Number: I.2.6 (Learning – Induction).
Learning in the presence of inaccurate information
In Proceedings of the 2nd Annual ACM Conference on Computational Learning Theory, 1989
Abstract

Cited by 9 (3 self)
The present paper considers the effects of introducing inaccuracies in a learner’s environment in Gold’s learning model of identification in the limit. Three kinds of inaccuracies are considered: presence of spurious data is modeled as learning from a noisy environment, missing data is modeled as learning from an incomplete environment, and the presence of a mixture of both spurious and missing data is modeled as learning from an imperfect environment. Two learning domains are considered, namely, identification of programs from graphs of computable functions and identification of grammars from positive data about recursively enumerable languages. Many hierarchies and tradeoffs resulting from the interplay between the number of errors allowed in the final hypotheses, the number of inaccuracies in the data, the types of inaccuracies, and the type of success criteria are derived. An interesting result is that in the context of function learning, incomplete data is strictly worse for learning than noisy data.
On Aggregating Teams of Learning Machines
Theoretical Computer Science A, 1994
Abstract

Cited by 9 (4 self)
The present paper studies the problem of when a team of learning machines can be aggregated into a single learning machine without any loss in learning power. The main results concern aggregation ratios for vacillatory identification of languages from texts. For a positive integer n, a machine is said to TxtFex_n identify a language L just in case the machine converges to up to n grammars for L on any text for L. For such identification criteria, the aggregation ratio is derived for the n = 2 case. It is shown that the collections of languages that can be TxtFex_2 identified by teams with success ratio greater than 5/6 are the same as those collections of languages that can be TxtFex_2 identified by a single machine. It is also established that 5/6 is indeed the cutoff point by showing that there are collections of languages that can be TxtFex_2 identified by a team employing 6 machines, at least 5 of which are required to be successful, but cannot be TxtFex_2 identified by any single machine. Additionally, aggregation ratios are also derived for finite identification of languages from positive data and for numerous criteria involving language learning from both positive and negative data.
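The simplest way to picture aggregation (a toy of our own, far weaker than the paper's 5/6-ratio construction): run every team member on the same input segment and output the conjecture emitted by the largest number of members. This plurality vote succeeds only when a strict majority of the team converges to the same grammar; the paper's results show that for TxtFex_2 teams a much subtler combination is possible up to the 5/6 cutoff.

```python
# Hypothetical plurality-vote aggregation of a team of learners.
# Each team member is a function from an input segment to a grammar.
from collections import Counter

def aggregate(team):
    def machine(segment):
        votes = Counter(member(segment) for member in team)
        return votes.most_common(1)[0][0]  # most frequent conjecture
    return machine

# Two of three members agree on grammar "g1", so the vote follows them.
team = [lambda s: "g1", lambda s: "g1", lambda s: "g2"]
combined = aggregate(team)   # combined(()) == "g1"
```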