Results 1-10 of 82
Categorial Type Logics
Handbook of Logic and Language, 1997
Abstract

Cited by 291 (6 self)
Contents
1 Introduction: grammatical reasoning
2 Linguistic inference: the Lambek systems
2.1 Modeling grammatical composition
2.2 Gentzen calculus, cut elimination and decidability
2.3 Discussion: options for resource management
3 The syntax-semantics interface: proofs and readings
3.1 Term assignment for categorial deductions
3.2 Natural language interpretation: the deductive view
4 Grammatical composition: multimodal systems
4.1 Mixed inference: the modes of composition
4.2 Grammatical composition: unary operations
4.2.1 Unary connectives: logic and structure
4.2.2 Applications: imposing constraints, structural relaxation
The Acquisition of a Unification-Based Generalised Categorial Grammar
2002
Abstract

Cited by 25 (4 self)
The purpose of this work is to investigate the process of grammatical acquisition from data. In order to do that, a computational learning system is used, composed of a Universal Grammar with associated parameters, and a learning algorithm, following the Principles and Parameters Theory. The Universal Grammar is implemented as a Unification-Based Generalised Categorial Grammar, embedded in a default inheritance network of lexical types. The learning algorithm receives input from a corpus of spontaneous child-directed transcribed speech annotated with logical forms and sets the parameters based on this input. This framework is used as a basis to investigate several aspects of language acquisition. In this thesis I concentrate on the acquisition of subcategorisation frames and word order information from data. The data to which the learner is exposed can be noisy and ambiguous, and I investigate how these factors affect the learning process. The results obtained show a robust learner converging towards the target grammar given the input data available. They also show how the amount of noise present in the input data affects the speed of convergence of the learner towards the target grammar. Future work is suggested for investigating the developmental stages of language acquisition as predicted by the learning model, with a thorough comparison with the developmental stages of a child. This is primarily a cognitive computational model of language learning that can be used to investigate and gain a better understanding of human language acquisition, and can potentially be relevant to the development of more adaptive NLP technology.
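The abstract above describes a learner that sets grammar parameters from input, in the Principles and Parameters style. A minimal sketch of such a parameter setter is given below; it is purely illustrative and not the thesis's actual algorithm. A grammar is a tuple of binary parameters, each toy "sentence" is summarized by the parameter values it requires, and `parses` is a stand-in oracle.

```python
import random

# Hypothetical trigger-style parameter-setting learner (illustrative
# sketch only; not the thesis's algorithm). A grammar is a tuple of
# binary parameters; a sentence is a list of (index, value) pairs
# giving the parameter settings it requires.

def parses(grammar, sentence):
    # Toy oracle: the sentence is licensed iff every parameter it
    # depends on matches the grammar's setting.
    return all(grammar[i] == v for i, v in sentence)

def learn(corpus, n_params=3, seed=0):
    rng = random.Random(seed)
    grammar = tuple(rng.randint(0, 1) for _ in range(n_params))
    for sentence in corpus:
        if not parses(grammar, sentence):
            # Conservative step: try single-parameter flips and keep
            # the first one that licenses the current sentence.
            for i in range(n_params):
                flipped = grammar[:i] + (1 - grammar[i],) + grammar[i + 1:]
                if parses(flipped, sentence):
                    grammar = flipped
                    break
    return grammar

# Sentences consistent with the target setting (1, 0, 1); repeated
# exposure lets the learner converge from any initial hypothesis.
corpus = [[(0, 1)], [(1, 0), (2, 1)], [(2, 1)]] * 50
print(learn(corpus))  # => (1, 0, 1)
```

Because flips are only kept when they license the current sentence, the hypothesis can never move away from a target-consistent setting; with enough exposures it stabilizes, mirroring the convergence behaviour the abstract reports.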
Angluin's Theorem for Indexed Families of R.e. Sets and Applications
1996
Abstract

Cited by 14 (0 self)
We extend Angluin's (1980) theorem to characterize identifiability of indexed families of r.e. languages, as opposed to indexed families of recursive languages. We also prove some variants characterizing conservativity and two other similar restrictions, paralleling Zeugmann, Lange, and Kapur's (1992, 1995) results for indexed families of recursive languages.

1 Introduction

A significant portion of the work of recent years in the field of inductive inference of formal languages, as initiated by Gold (1967), stems from Angluin's (1980b) theorem, which characterizes when an indexed family of recursive languages is identifiable in the limit from positive data in the sense of Gold. Up until around 1980, a prevalent view had been that inductive inference from positive data is too weak to be of much theoretical interest. This misconception was due to the negative result in Gold's original paper, which says that any class of languages that contains every finite language and at least one infini...
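The notion of identification in the limit from positive data that this entry builds on can be sketched concretely. The toy indexed family below (L_n = {0, ..., n}) is illustrative, not from the paper; the learner outputs the least index consistent with the data seen so far, a conservative strategy whose guesses stabilize once enough of the target language has appeared.

```python
# Gold-style identification in the limit from positive data, sketched
# on an illustrative indexed family L_n = {0, 1, ..., n} (not the
# paper's setting, which concerns r.e. families).

def L(n):
    """The n-th language in the indexed family."""
    return set(range(n + 1))

def learner(text_prefix, bound=1000):
    # Conservative "least consistent index" strategy: output the
    # smallest n such that L_n contains every datum seen so far.
    seen = set(text_prefix)
    for n in range(bound):
        if seen <= L(n):
            return n
    return None

# A text (positive presentation) for L_7; the guess stabilizes on 7
# as soon as the largest element, 7, has been presented.
text = [3, 0, 7, 2, 7, 5]
guesses = [learner(text[:k + 1]) for k in range(len(text))]
print(guesses)  # => [3, 3, 7, 7, 7, 7]
```

Angluin-style characterizations say, roughly, when such a stabilizing strategy exists for a whole family; the move from recursive to r.e. families in this paper changes which side conditions are needed.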
Unsupervised Lexical Learning with Categorial Grammars
In Proceedings of the 1st Workshop on Learning Language in Logic, 1999
Abstract

Cited by 14 (1 self)
In this paper we report on an unsupervised approach to learning Categorial Grammar (CG) lexicons. The learner is provided with a set of possible lexical CG categories, the forward and backward application rules of CG, and unmarked positive-only corpora. Using the categories and rules, the sentences from the corpus are probabilistically parsed. The parses and the history of previously parsed sentences are used to build a lexicon and annotate the corpus. We report the results from experiments on a number of small generated corpora that contain examples from subsets of the English language. These show that the system is able to generate reasonable lexicons and provide accurately parsed corpora in the process. We also discuss ways in which the approach can be scaled up to deal with larger and more diverse corpora.
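The forward and backward application rules the learner is given are the standard CG rules: (A/B) B => A and B (A\B) => A. A minimal sketch of these rules with a small CYK-style recognizer is shown below; the lexicon is illustrative, not the paper's, and complex categories are encoded as (result, slash, argument) tuples.

```python
# Classical categorial grammar application rules with a tiny CYK-style
# recognizer. Atomic categories are strings; complex categories are
# tuples (result, slash, argument), e.g. (("S", "\\", "NP"), "/", "NP")
# for a transitive verb. Lexicon and example are illustrative only.

def combine(x, y):
    """All categories derivable from adjacent categories x, y."""
    results = []
    if isinstance(x, tuple) and x[1] == "/" and x[2] == y:
        results.append(x[0])          # forward application: (A/B) B => A
    if isinstance(y, tuple) and y[1] == "\\" and y[2] == x:
        results.append(y[0])          # backward application: B (A\B) => A
    return results

def derivable(cats):
    """Categories derivable for the whole sequence (CYK over spans)."""
    n = len(cats)
    chart = {(i, i + 1): {cats[i]} for i in range(n)}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            cell = set()
            for k in range(i + 1, i + span):
                for x in chart[i, k]:
                    for y in chart[k, i + span]:
                        cell.update(combine(x, y))
            chart[i, i + span] = cell
    return chart[0, n]

lexicon = {"John": "NP", "Mary": "NP",
           "loves": (("S", "\\", "NP"), "/", "NP")}
print(derivable([lexicon[w] for w in ["John", "loves", "Mary"]]))
# => {'S'}
```

The learner described in the abstract works in the opposite direction: given such rules and a category inventory, it searches over possible word-to-category assignments that let sentences derive S, scoring candidates probabilistically across the corpus.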
Structural Equations in Language Learning
Proceedings LACL 2001, Springer Lecture Notes in Artificial Intelligence 2099, 2001
Abstract

Cited by 11 (3 self)
In categorial systems with a fixed structural component, the learning problem comes down to finding the solution for a set of type-assignment equations. A hard-wired structural component is problematic if one wants to address issues of structural variation. Our starting point is a type-logical architecture with separate modules for the logical and the structural components of the computational system. The logical component expresses invariants of grammatical composition; the structural component captures variation in the realization of the correspondence between form and meaning. Learning in this setting involves finding the solution to both the type-assignment equations and the structural equations of the language at hand. We develop a view on these two subtasks which pictures learning as a process moving through a two-stage cycle.
Item-based constructions and the logical problem
ACL, 2005
Abstract

Cited by 11 (4 self)
The logical problem of language is grounded on arguments from poverty of positive evidence and arguments from poverty of negative evidence. Careful analysis of child language corpora shows that, if one assumes that children learn through item-based constructions, there is an abundance of positive evidence. Arguments regarding the poverty of negative evidence can also be addressed by the mechanism of conservative item-based learning. When conservatism is abandoned, children can rely on competition, cue construction, ...
Semantic Bootstrapping of Type-Logical Grammar
Journal of Logic, Language and Information, 2002
Abstract

Cited by 8 (1 self)
A procedure is described which induces type-logical grammar lexicons from sentences annotated with skeletal terms of the simply typed lambda calculus. A generalized formulae-as-types correspondence is exploited to obtain all the type-logical proofs of the sample sentences from their lambda terms, and the resulting lexicons are then optimally unified, which effectively unifies the syntactic categories of words which have the same syntactic behavior evident in the induced structures. This effort extends the earlier induction of such lexicons for classical categorial grammar (Buszkowski and Penn, 1990) to at first the non-associative Lambek calculus, and then to a large class of type logics enriched by modal operators and structural rules. The motivation for this approach is linguistic: we have implemented a theoretically operational procedure for semantic bootstrapping of natural language syntax, which is the first one in any setting with sufficient scope to meet the demands of descriptively adequate natural language grammars. One of the main points of the enterprise is that the syntactic and semantic categories operating in the language are learned, in direct opposition to more familiar grammar induction procedures which begin with a fixed set of categories and frequently part-of-speech-tagged data as well. This general approach could be used by linguists to learn something about lexical categories and to develop linguistically insightful complete grammars for large fragments.
The subset principle in syntax: costs of compliance
Linguistics, 2005
Abstract

Cited by 6 (2 self)
We draw attention here to some unsolved problems in the application of the Subset Principle (SP) to syntax acquisition. While noting connections to formal results in computational linguistics, our focus is on how SP could be implemented in a way that is both linguistically well-grounded and psychologically feasible. We concentrate on incremental learning (with no memory for past inputs), which is now widely assumed in psycholinguistics. However, in investigating its interactions with SP, we uncover the rather startling fact that incremental learning and SP are incompatible, given other standard assumptions. We set out some ideas for ways in which they might be reconciled. Some seem more promising than others, but all appear to carry severe costs in terms of computational load, learning speed or memory resources. The penalty for disobeying SP has long been understood. In future language acquisition research it will be important to address the costs of obeying SP.