Results 1-10 of 19
Angluin's Theorem for Indexed Families of R.e. Sets and Applications
, 1996
Abstract

Cited by 14 (0 self)
We extend Angluin's (1980) theorem to characterize identifiability of indexed families of r.e. languages, as opposed to indexed families of recursive languages. We also prove some variants characterizing conservativity and two other similar restrictions, paralleling Zeugmann, Lange, and Kapur's (1992, 1995) results for indexed families of recursive languages.

1 Introduction

A significant portion of the work of recent years in the field of inductive inference of formal languages, as initiated by Gold (1967), stems from Angluin's (1980b) theorem, which characterizes when an indexed family of recursive languages is identifiable in the limit from positive data in the sense of Gold. Up until around 1980, a prevalent view had been that inductive inference from positive data is too weak to be of much theoretical interest. This misconception was due to the negative result in Gold's original paper, which says that any class of languages that contains every finite language and at least one infini...
Elementary formal systems, intrinsic complexity, and procrastination
 Information and Computation
, 1997
Abstract

Cited by 14 (6 self)
Recently, rich subclasses of elementary formal systems (EFS) have been shown to be identifiable in the limit from only positive data. Examples of these classes are Angluin's pattern languages, unions of pattern languages by Wright and Shinohara, and classes of languages definable by length-bounded elementary formal systems studied by Shinohara. The present paper employs two distinct bodies of abstract studies in the inductive inference literature to analyze the learnability of these concrete classes. The first approach, introduced by Freivalds and Smith, uses constructive ordinals to bound the number of mind changes; ω denotes the first limit ordinal. An ordinal mind change bound of ω means that identification can be carried out by a learner that, after examining some element(s) of the language, announces an upper bound on the number of mind changes it will make before converging; a bound of ω · 2 means that the learner reserves the right to revise this upper bound once; a bound of ω · 3 means the learner reserves the right to revise this upper bound twice, and so on. A bound of ω^2 means that identification can be carried out by a learner that announces an upper bound on the number of times it may revise its conjectured upper bound on the number of mind changes. It is shown in the present paper that the ordinal mind change complexity for identification of languages formed by unions of up to n pattern languages is ω^n.
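The ordinal bookkeeping described in this abstract can be sketched in a few lines. This is a hypothetical illustration, not code from the paper: it represents a budget of the form ω·a + b as the pair (a, b), and enforces that every mind change strictly decreases the ordinal.

```python
# Hypothetical sketch (not from the paper): counting mind changes against
# an ordinal budget w*a + b, represented as the pair (a, b).
# A mind change must strictly decrease the ordinal: if b > 0 the learner
# spends one of its announced finite changes (b -= 1); if b == 0 but a > 0,
# the learner "revises its upper bound", replacing w*a by w*(a-1) + b_new
# for any finite b_new it announces at that moment.

def spend_mind_change(budget, new_finite_bound=None):
    """Return the strictly smaller ordinal budget (a, b), or raise if exhausted."""
    a, b = budget
    if b > 0:
        return (a, b - 1)
    if a > 0:
        if new_finite_bound is None or new_finite_bound < 0:
            raise ValueError("must announce a fresh finite bound to step below w*a")
        return (a - 1, new_finite_bound)
    raise RuntimeError("ordinal mind change budget exhausted")

# A budget of w*2 lets the learner announce a finite bound, use it up,
# then revise that bound exactly once.
budget = (2, 0)
budget = spend_mind_change(budget, new_finite_bound=3)  # announce bound 3 -> (1, 3)
budget = spend_mind_change(budget)                      # -> (1, 2)
budget = spend_mind_change(budget)                      # -> (1, 1)
budget = spend_mind_change(budget)                      # -> (1, 0)
budget = spend_mind_change(budget, new_finite_bound=1)  # revise once -> (0, 1)
```

Under this encoding a bound of ω^2 would require a third counter for revisions of the revision bound; the pair suffices for the ω·k bounds discussed above.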
Inferring Pure Context-Free Languages from Positive Data
 ACTA CYBERNETICA
, 1997
Abstract

Cited by 9 (0 self)
We study the possibilities of inferring pure context-free languages from positive data. We show that while the whole class of pure context-free languages is not inferable from positive data, it has interesting subclasses which have the desired inference property. We study uniform pure languages, i.e., languages generated by pure grammars obeying restrictions on the length of the right-hand sides of their productions, and pure languages generated by deterministic pure grammars.
On Approximately Identifying Concept Classes in the Limit
 in: Proc. 6th Internat. Workshop on Algorithmic Learning Theory, LNAI 997 (Springer-Verlag)
, 1995
Abstract

Cited by 6 (1 self)
In this paper, we introduce various kinds of approximations of a concept and propose a framework of approximate learning for the case that a target concept may lie outside the hypothesis space. We present some characterization theorems for approximate identifiability. In particular, we show the remarkable result that upper-best approximate identifiability from complete data collapses into upper-best approximate identifiability from positive data. Further, some other characterizations of approximate identifiability from positive data are presented, establishing a relationship between approximate identifiability and some important notions in quasi-order theory and topology. The results obtained in this paper are essentially related to the closure property of concept classes under infinite intersections (or infinite unions). We also show that there exist some interesting example concept classes with such properties (including specialized EFSs) by which an upper...
Extensional Set Learning
 Proceedings of the Twelfth Annual Conference on Computational Learning Theory (COLT '99)
, 2000
Abstract

Cited by 4 (2 self)
We investigate the model recBC of learning of r.e. sets, where changes in hypotheses only count when there is an extensional difference. We study the learnability of collections that are uniformly r.e. We prove that, in contrast with the case of uniformly recursive collections, identifiability does not imply recursive BC-identifiability. This answers a question of D. de Jongh. In contrast to the model of recursive identifiability, we prove that the BC-model separates the notions of finite thickness and finite elasticity.

1 Introduction

In this paper we consider a model of learning where two hypotheses about the data under consideration are considered equal when they denote the same object, i.e., when they are extensionally the same. This model was first defined for identification of functions in Feldman [6] and Barzdin [3]. The first reference for this model in the context of set learning (learning from text) seems to be Osherson and Weinstein [14]. The model, and similar ones, ha...
Mind change efficient learning
 Information and Computation
, 2005
Abstract

Cited by 3 (2 self)
This paper studies efficient learning with respect to mind changes. Our starting point is the idea that a learner that is efficient with respect to mind changes minimizes mind changes not only globally in the entire learning problem, but also locally in subproblems after receiving some evidence. Formalizing this idea leads to the notion of uniform mind change optimality. We characterize the structure of language classes that can be identified with at most α mind changes by some learner (not necessarily effective): a language class L is identifiable with α mind changes iff the accumulation order of L is at most α. Accumulation order is a classic concept from point-set topology. To aid the construction of learning algorithms, we show that the characteristic property of uniformly mind change optimal learners is that they output conjectures (languages) with maximal accumulation order. We illustrate the theory by describing mind change optimal learners for various problems such as identifying linear subspaces and one-variable patterns.
Planar languages and learnability
 IN INTERNATIONAL COLLOQUIUM ON GRAMMATICAL INFERENCE (ICGI)
, 2006
Abstract

Cited by 3 (2 self)
Strings can be mapped into Hilbert spaces using feature maps such as the Parikh map. Languages can then be defined as the preimages of hyperplanes in the feature space, rather than by grammars or automata: these are the planar languages. In this paper we show that, using techniques from kernel-based learning, we can represent and efficiently learn, from positive data alone, various linguistically interesting context-sensitive languages. In particular we show that the cross-serial dependencies in Swiss German, which established the non-context-freeness of natural language, are learnable using a standard kernel. We demonstrate the polynomial-time identifiability in the limit of these classes, discuss some of their language-theoretic properties, and discuss their relationship to the choice of kernel/feature map.
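The core construction in this abstract is concrete enough to sketch. The following is a hypothetical illustration, not the paper's implementation: the Parikh map sends a string to its vector of letter counts, and a planar language is the set of strings whose image lies on a fixed hyperplane ⟨normal, x⟩ = offset.

```python
# Hypothetical illustration (not the paper's code): the Parikh map and a
# membership test for a planar language defined as a hyperplane preimage.

from collections import Counter

ALPHABET = ("a", "b")

def parikh(s):
    """Map a string over ALPHABET to its letter-count vector."""
    counts = Counter(s)
    return tuple(counts.get(ch, 0) for ch in ALPHABET)

def in_planar_language(s, normal, offset):
    """Is the Parikh image of s on the hyperplane <normal, x> = offset?"""
    x = parikh(s)
    return sum(n * xi for n, xi in zip(normal, x)) == offset

# The (non-regular) language { w : #a(w) = #b(w) } is the preimage of the
# hyperplane x_a - x_b = 0 under the Parikh map.
print(in_planar_language("abba", (1, -1), 0))  # True: counts are equal
print(in_planar_language("aab", (1, -1), 0))   # False: counts differ
```

Since the Parikh map forgets letter order, richer feature maps (e.g., counting factors or subsequences) are needed for the context-sensitive classes the paper targets; the hyperplane-preimage idea stays the same.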
k-Valued Non-Associative Lambek Grammars Are Learnable from Function-Argument Structures
 ELECTRONIC NOTES IN THEORETICAL COMPUTER SCIENCE
, 2003
Abstract

Cited by 3 (1 self)
This paper is concerned with learning categorial grammars in the model of Gold. We show that rigid and k-valued non-associative Lambek grammars are learnable from function-argument structured sentences. In fact, function-argument structures are natural syntactic decompositions of sentences into subcomponents, with an indication of the head of each subcomponent.
Learning Approaches to Wrapper Induction
 IN PROC. 14TH INTERNATIONAL FLORIDA AI RESEARCH SYMPOSIUM CONFERENCE
, 2001
Abstract

Cited by 2 (2 self)
The number, the size, and the dynamics of Internet information sources bear abundant evidence of the need for automation in information extraction (IE). This paper deals with the question of how such extraction mechanisms can be created automatically by invoking learning techniques.
k-Valued Link Grammars Are Learnable from Strings
, 2003
Abstract

Cited by 1 (0 self)
The article is concerned with learning link grammars in the model of Gold. We show that rigid and k-valued link grammars are learnable from strings. In fact, we prove that the languages of link-structured lists of words associated with rigid link grammars have finite elasticity, and we exhibit a learning algorithm. As a standard corollary, this result leads to the learnability of rigid or k-valued link grammars from strings.