Results 1 – 7 of 7
Language Learning With Some Negative Information
1993
Abstract

Cited by 18 (10 self)
Gold–style language learning is a formal theory of learning from examples by algorithmic devices called learning machines. Originally motivated by child language learning, it features the algorithmic synthesis (in the limit) of grammars for formal languages from information about those languages. In traditional Gold–style language learning, learning machines are not provided with negative information, i.e., information about the complements of the input languages. We investigate two approaches to providing small amounts of negative information and demonstrate in each case a strong resulting increase in learning power. Finally, we show that small packets of negative information also lead to increased speed of learning. This result agrees with a psycholinguistic hypothesis of McNeill correlating the availability of parental expansions with the speed of child language development.
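The Gold-style paradigm this abstract builds on can be illustrated with a toy sketch: a learner for the finite class L_n = {0, 1, ..., n} receives positive examples one at a time and conjectures an index, stabilizing in the limit. The class and all names below are illustrative assumptions, not from the paper.

```python
# Minimal sketch of identification in the limit, assuming the toy
# class L_n = {0, 1, ..., n} and positive data only.

def learner(examples_so_far):
    """Conjecture the smallest L_n consistent with the positive data seen."""
    return max(examples_so_far)

def identify_in_the_limit(text):
    """Run the learner on a text (an enumeration of the target language).

    Identification in the limit: after finitely many examples the sequence
    of conjectures stabilizes on a correct index and never changes again.
    """
    seen, conjectures = [], []
    for example in text:
        seen.append(example)
        conjectures.append(learner(seen))
    return conjectures

# A text for L_3 = {0, 1, 2, 3}; once 3 appears, the conjecture locks in.
print(identify_in_the_limit([0, 2, 1, 3, 0, 3]))  # [0, 2, 2, 3, 3, 3]
```

Note that without negative information this learner can only generalize upward; the abstract's point is that even small packets of negative data enlarge what such learners can identify.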
Elementary formal systems, intrinsic complexity, and procrastination
Information and Computation, 1997
Abstract

Cited by 12 (5 self)
Recently, rich subclasses of elementary formal systems (EFS) have been shown to be identifiable in the limit from only positive data. Examples of these classes are Angluin’s pattern languages, unions of pattern languages by Wright and Shinohara, and classes of languages definable by length-bounded elementary formal systems studied by Shinohara. The present paper employs two distinct bodies of abstract studies in the inductive inference literature to analyze the learnability of these concrete classes. The first approach, introduced by Freivalds and Smith, uses constructive ordinals to bound the number of mind changes. ω denotes the first limit ordinal. An ordinal mind change bound of ω means that identification can be carried out by a learner that, after examining some element(s) of the language, announces an upper bound on the number of mind changes it will make before converging; a bound of ω · 2 means that the learner reserves the right to revise this upper bound once; a bound of ω · 3 means the learner reserves the right to revise this upper bound twice, and so on. A bound of ω^2 means that identification can be carried out by a learner that announces an upper bound on the number of times it may revise its conjectured upper bound on the number of mind changes. It is shown in the present paper that the ordinal mind change complexity for identification of languages formed by unions of up to n pattern languages is ω^n. It is ...
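Angluin's pattern languages, one of the concrete classes named in this abstract, admit a compact membership sketch: a pattern mixes constant symbols with variables, and a string belongs to the language if some substitution of non-empty strings for the variables (consistent across repeated occurrences) yields it. The single-uppercase-letter variable convention below is an illustrative assumption.

```python
# Membership test for a pattern language via backtracking substitution.
# Uppercase letters are variables; everything else is a constant symbol.

def matches(pattern, s, binding=None):
    """Return True if s is in the language of pattern."""
    if binding is None:
        binding = {}
    if not pattern:
        return not s
    head, rest = pattern[0], pattern[1:]
    if head.isupper():                      # variable
        if head in binding:                 # already bound: must reappear
            v = binding[head]
            return s.startswith(v) and matches(rest, s[len(v):], binding)
        for i in range(1, len(s) + 1):      # try every non-empty value
            binding[head] = s[:i]
            if matches(rest, s[i:], binding):
                return True
            del binding[head]               # backtrack
        return False
    return bool(s) and s[0] == head and matches(rest, s[1:], binding)

# Pattern aXbX: both occurrences of X must take the same value.
print(matches("aXbX", "acbc"))    # True  (X = "c")
print(matches("aXbX", "acbd"))    # False
```

The requirement that repeated variables take the same value is what makes unions of these languages a natural test case for ordinal mind change bounds.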
Learning Elementary Formal Systems with Queries
2000
Abstract

Cited by 5 (3 self)
An elementary formal system (EFS, for short) is a kind of logic program that directly manipulates character strings. A number of studies have investigated the ability of EFS as a uniform framework for language learning in various learning models, including model inference, inductive inference, and PAC-learning. In this paper, we investigate the polynomial-time learnability of EFS from the viewpoint of active learning allowing membership queries. Positive results include the polynomial-time learnability of the class of terminating HEFS of variable-occurrence k and arity r from equivalence queries and entailment membership queries with information on termination. We also present a lower bound result showing that the algorithm is near-optimal in query complexity. Negative results include a series of representation-independent hardness results, which, to our knowledge, fill the gap between the learnable and the non-learnable subclasses of EFS. In particular, we show th...
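The active-learning protocol this abstract works in (a learner poses queries, a teacher answers or returns a counterexample) can be sketched for a toy concept class of finite sets. This is not the paper's HEFS algorithm; the teacher, learner, and all names are illustrative assumptions.

```python
# Toy equivalence-query learning loop: the learner proposes a hypothesis set,
# the teacher either accepts it or returns a counterexample, and the learner
# repairs the hypothesis accordingly.

def equivalence_query(hypothesis, target, universe):
    """Teacher: return None if the sets agree on the universe, else a counterexample."""
    for x in universe:
        if (x in hypothesis) != (x in target):
            return x
    return None

def learn_with_queries(target, universe):
    """Learner: repair the hypothesis on each counterexample until accepted."""
    hypothesis = set()
    queries = 0
    while True:
        queries += 1
        cex = equivalence_query(hypothesis, target, universe)
        if cex is None:
            return sorted(hypothesis), queries
        if cex in target:        # positive counterexample: add it
            hypothesis.add(cex)
        else:                    # negative counterexample: remove it
            hypothesis.discard(cex)

print(learn_with_queries({2, 3, 5, 7}, range(10)))  # ([2, 3, 5, 7], 5)
```

Query complexity here grows with the symmetric difference repaired per query; the paper's lower bound result is about the analogous cost for EFS hypotheses.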
On the Complexity of Consistent Identification of some Classes of Structure Languages (Extended Abstract)
2000
Abstract

Cited by 1 (1 self)
In [5, 7], 'discovery procedures' for CCGs were defined that accept a sequence of structures as input and yield a set of grammars...
Learning Generalized Quantifiers
2002
Abstract

Cited by 1 (0 self)
This paper addresses the question of the learnability of generalized quantifiers. This topic was first taken up in (van Benthem 1986a) but has received little attention since then. There are a few results: in (Clark 1996) it was shown that first-order generalized quantifiers are learnable with membership queries. In (Tiede 1999) it was shown, among other things, that the left upward monotone quantifiers are learnable from positive data. Applying results from the field of formal learning theory, the results from (Tiede 1999) are strengthened: it is shown that these classes are learnable under psychologically plausible restrictions on the learner.
The Learnability of Recursive Languages in Dependence on the Hypothesis Space
HTWK Leipzig, FB Mathematik, Informatik und Naturwissenschaften, GOSLER-Report, 1993
Abstract

Cited by 1 (1 self)
We study the learnability of indexed families L = (L_j)_{j∈ℕ} of uniformly recursive languages under certain monotonicity constraints. Thereby we distinguish between exact learnability (L has to be learnt with respect to the space L of hypotheses), class preserving learning (L has to be inferred with respect to some space G of hypotheses having the same range as L), and class comprising inference (L has to be learnt with respect to some space G of hypotheses whose range comprises range(L)). In particular, it is proved that, whenever monotonicity requirements are involved, exact learning is almost always weaker than class preserving inference, which itself turns out to be almost always weaker than class comprising learning. Next, we provide additional insight into the problem under what conditions, for example, exact and class preserving learning procedures are of equal power. Finally, we deal with the question of what kind of languages has to be added to the space of hypothe...
Negative Data in Learning Languages
2007
Abstract
The paper is a survey of recent results on algorithmic learning (inductive inference) of languages from the full collection of positive examples and some negative data. Different types of negative data are considered. We primarily concentrate on learning using (1) carefully chosen finite negative data; (2) negative counterexamples provided when conjectures contain data not in the target language; (3) negative counterexamples obtained from a teacher (formally, an oracle) when the learner queries whether a hypothesis is contained in the target language. We also explore how least counterexamples and counterexamples of bounded size fare against arbitrary counterexamples. The effects of random negative data are also briefly considered.
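Model (2) of this survey, where a teacher supplies a negative counterexample whenever the conjecture contains data outside the target language, can be sketched on finite sets; the "least counterexample" variant from the survey is used for concreteness. The finite-set setting and all names below are illustrative assumptions, not the survey's formal framework.

```python
# Toy sketch: learning from positive data plus least negative counterexamples.
# Whenever the conjecture overgeneralizes, the teacher returns the smallest
# element of the conjecture that is not in the target, and the learner prunes.

def least_negative_counterexample(conjecture, target):
    """Teacher: smallest element wrongly included in the conjecture, if any."""
    bad = sorted(x for x in conjecture if x not in target)
    return bad[0] if bad else None

def learn(positive_text, target, initial_guess):
    conjecture = set(initial_guess)
    for x in positive_text:
        conjecture.add(x)                      # incorporate positive data
        cex = least_negative_counterexample(conjecture, target)
        while cex is not None:                 # prune on negative information
            conjecture.discard(cex)
            cex = least_negative_counterexample(conjecture, target)
    return conjecture

target = {1, 2, 3}
print(sorted(learn([1, 3, 2], target, initial_guess={1, 9, 4})))  # [1, 2, 3]
```

Pruning on counterexamples is what lets the learner recover from an overgeneral initial guess, which positive data alone can never correct.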