Results 1 - 10 of 49
Vector-based models of semantic composition
- In Proceedings of ACL-08: HLT, 2008
"... This paper proposes a framework for representing the meaning of phrases and sentences in vector space. Central to our approach is vector composition which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models which ..."
Abstract - Cited by 220 (5 self)
This paper proposes a framework for representing the meaning of phrases and sentences in vector space. Central to our approach is vector composition which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models which we evaluate empirically on a sentence similarity task. Experimental results demonstrate that the multiplicative models are superior to the additive alternatives when compared against human judgments.
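For concreteness, a minimal sketch of the two composition functions the abstract names, using NumPy; the 5-dimensional vectors are hypothetical stand-ins for corpus-derived co-occurrence vectors, and the paper also evaluates weighted and combined variants not shown here:

```python
# A minimal sketch of additive and multiplicative vector composition.
import numpy as np

def compose_additive(u, v):
    """p = u + v: each phrase dimension is the sum of the word dimensions."""
    return u + v

def compose_multiplicative(u, v):
    """p = u * v (component-wise): dimensions active in both words dominate."""
    return u * v

def cosine(a, b):
    """Cosine similarity, the usual comparison measure in these models."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

practical = np.array([0.5, 0.1, 0.9, 0.0, 0.3])   # toy word vector
difficulty = np.array([0.4, 0.0, 0.8, 0.1, 0.2])  # toy word vector

phrase_add = compose_additive(practical, difficulty)
phrase_mul = compose_multiplicative(practical, difficulty)
print(cosine(phrase_add, phrase_mul))
```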
Composition in distributional models of semantics
- 2010
"... Distributional models of semantics have proven themselves invaluable both in cog-nitive modelling of semantic phenomena and also in practical applications. For ex-ample, they have been used to model judgments of semantic similarity (McDonald, 2000) and association (Denhire and Lemaire, 2004; Griffit ..."
Abstract - Cited by 148 (3 self)
Distributional models of semantics have proven themselves invaluable both in cognitive modelling of semantic phenomena and in practical applications. For example, they have been used to model judgments of semantic similarity (McDonald, 2000) and association (Denhière and Lemaire, 2004; Griffiths et al., 2007) and have been shown to achieve human-level performance on synonymy tests (Landauer and Dumais, 1997; Griffiths et al., 2007) such as those included in the Test of English as a Foreign Language (TOEFL). This ability has been put to practical use in automatic thesaurus extraction (Grefenstette, 1994). However, while there has been a considerable amount of research directed at the most effective ways of constructing representations for individual words, the representation of larger constructions, e.g., phrases and sentences, has received relatively little attention. In this thesis we examine this issue of how to compose meanings within distributional models of semantics to form representations of multi-word structures. Natural language data typically consists of such complex structures, rather than ...
Representing word meaning and order information in a composite holographic lexicon
- Psychological Review, 2007
"... The authors present a computational model that builds a holographic lexicon representing both word meaning and word order from unsupervised experience with natural language. The model uses simple convolution and superposition mechanisms (cf. B. B. Murdock, 1982) to learn distributed holographic repr ..."
Abstract - Cited by 123 (14 self)
The authors present a computational model that builds a holographic lexicon representing both word meaning and word order from unsupervised experience with natural language. The model uses simple convolution and superposition mechanisms (cf. B. B. Murdock, 1982) to learn distributed holographic representations for words. The structure of the resulting lexicon can account for empirical data from classic experiments studying semantic typicality, categorization, priming, and semantic constraint in sentence completions. Furthermore, order information can be retrieved from the holographic representations, allowing the model to account for limited word transitions without the need for built-in transition rules. The model demonstrates that a broad range of psychological data can be accounted for directly from the structure of lexical representations learned in this way, without the need for complexity to be built into either the processing mechanisms or the representations. The holographic representations are an appropriate knowledge representation to be used by higher order models of language comprehension, relieving the complexity required at the higher level.
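An illustrative take on the convolution and superposition mechanisms the abstract mentions (in the spirit of holographic reduced representations); the dimensionality, random environmental vectors, and single-pair "training" step below are placeholders, not the paper's procedure:

```python
# Binding by circular convolution, storage by superposition.
import numpy as np

def circular_convolution(a, b):
    """Bind two vectors by circular convolution, computed via FFTs."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

dim = 1024
rng = np.random.default_rng(0)
dog = rng.normal(0.0, 1.0 / np.sqrt(dim), dim)   # environmental vector
bite = rng.normal(0.0, 1.0 / np.sqrt(dim), dim)  # environmental vector

# Superpose (add) a bound order pair into the lexical trace for "dog":
# the trace now carries both meaning-like and order-like information.
trace_dog = dog + circular_convolution(dog, bite)
```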
How Latent is Latent Semantic Analysis?
- In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, 1999
"... Latent Semantic Analysis (LSA) is a statistical, corpus-based text comparison mechanism that was originally developed for the task of information retrieval, but in recent years has produced remarkably human-like abilities in a variety of language tasks. LSA has taken the Test of English as a Foreign ..."
Abstract - Cited by 35 (5 self)
Latent Semantic Analysis (LSA) is a statistical, corpus-based text comparison mechanism that was originally developed for the task of information retrieval, but in recent years has produced remarkably human-like abilities in a variety of language tasks. LSA has taken the Test of English as a Foreign Language and performed as well as non-native English speakers who were successful college applicants. It has shown an ability to learn words at a rate similar to humans. It has even graded papers as reliably as human graders. We have used LSA as a mechanism for evaluating the quality of student responses in an intelligent tutoring system, and its performance equals that of human raters with intermediate domain knowledge. It has been claimed that LSA’s text-comparison abilities stem primarily from its use of a statistical technique called singular value decomposition (SVD) which compresses a large amount of term and document co-occurrence information into a smaller space. This compression is said to capture the semantic information that is latent in the corpus itself. We test this claim by comparing LSA to a version of LSA without ...
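The SVD compression step at issue can be sketched in a few lines of NumPy; the tiny term-document count matrix below is fabricated for illustration:

```python
# Truncated SVD over a term-document matrix, the core of LSA.
import numpy as np

# Rows are terms, columns are documents (raw co-occurrence counts).
X = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                        # keep the k largest singular values
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-k reconstruction of X
term_vectors = U[:, :k] * s[:k]              # terms in the compressed space
```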
Interpretation-based processing: a unified theory of semantic sentence comprehension
- Cognitive Science, 2004
"... We present interpretation-based processing—a theory of sentence processing that builds a syntactic and a semantic representation for a sentence and assigns an interpretation to the sentence as soon as possible. That interpretation can further participate in comprehension and in lexical processing an ..."
Abstract - Cited by 27 (3 self)
We present interpretation-based processing, a theory of sentence processing that builds a syntactic and a semantic representation for a sentence and assigns an interpretation to the sentence as soon as possible. That interpretation can further participate in comprehension and in lexical processing and is vital for relating the sentence to the prior discourse. Our theory offers a unified account of the processing of literal sentences, metaphoric sentences, and sentences containing semantic illusions. It also explains how text can prime lexical access. We show that word literality is a matter of degree and that the speed and quality of comprehension depend both on how similar words are to their antecedents in the preceding text and on how salient the sentence is with respect to the preceding text. Interpretation-based processing also reconciles superficially contradictory findings about the difference in processing times for metaphors and literals. The theory has been implemented in ACT-R [Anderson and Lebiere, The ...
A hybrid language understanding approach for robust selection of tutoring goals
- In Proceedings of the Intelligent Tutoring Systems Conference, 2002
"... Abstract. In this paper, we explore the problem of selecting appropriate interventions for students based on an analysis of their interactions with a tutoring system. In the context of the WHY2 conceptual physics tutoring system, we describe CarmelTC, a hybrid symbolic/statistical approach for analy ..."
Abstract - Cited by 27 (18 self)
In this paper, we explore the problem of selecting appropriate interventions for students based on an analysis of their interactions with a tutoring system. In the context of the WHY2 conceptual physics tutoring system, we describe CarmelTC, a hybrid symbolic/statistical approach for analysing conceptual physics explanations in order to determine which Knowledge Construction Dialogues (KCDs) students need, to encourage them to include important points that are missing. We briefly describe our tutoring approach. We then present a model that demonstrates a general problem with selecting interventions based on an analysis of student performance in circumstances where there is uncertainty in the interpretation, such as with speech- or text-based natural language input, complex and error-prone mathematical or other formal language input, graphical input (i.e., diagrams, etc.), or gestures. In particular, when student performance completeness is high, intervention selection accuracy is more sensitive to analysis accuracy, and increasingly so as performance completeness increases. In light of this model, we have evaluated our CarmelTC approach and have demonstrated that it performs favourably in comparison with the widely used LSA approach, a Naive Bayes approach, and a purely symbolic approach.
Semantic relevance and semantic disorders
- Journal of Cognitive Neuroscience
"... & Semantic features are of different importance in concept representation. The concept elephant may be more easily identified from the feature <trunk> than from the feature <four legs>. We propose a new model of semantic memory to measure the relevance of semantic features for a conc ..."
Abstract - Cited by 16 (2 self)
& Semantic features are of different importance in concept representation. The concept elephant may be more easily identified from the feature <trunk> than from the feature <four legs>. We propose a new model of semantic memory to measure the relevance of semantic features for a concept and use this model to investigate the controversial issue of category specificity. Category-specific patients have an impairment in one domain of knowledge (e.g., living), whereas the other domain (e.g., nonliving) is relatively spared. We show that categories differ in the level of relevance and that, when concepts belonging to living and nonliving categories are equated to this parameter, the category-specific disorder disappears. Our findings suggest that category specificity, as well as other semantic-related effects, may be explained by a semantic memory model in which concepts are represented by semantic features with associated relevance values. &
Domain and function: A dual-space model of semantic relations and compositions
- Journal of Artificial Intelligence Research
"... Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the ..."
Abstract - Cited by 12 (1 self)
Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics.
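For concreteness, a toy sketch of how separate domain and function spaces might combine to score carpenter:wood :: mason:stone; the geometric-mean combination rule and the random vectors are illustrative assumptions, not the paper's exact formula:

```python
# Scoring an analogy a:b :: c:d with two separate spaces.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def relational_similarity(a, b, c, d, domain, function):
    """a:b :: c:d scores highly when each pair shares a domain
    (a with b, c with d) and functions match across pairs
    (a with c, b with d)."""
    dom = cosine(domain[a], domain[b]) * cosine(domain[c], domain[d])
    fun = cosine(function[a], function[c]) * cosine(function[b], function[d])
    return float(np.sqrt(max(dom, 0.0) * max(fun, 0.0)))

rng = np.random.default_rng(1)
words = ["carpenter", "wood", "mason", "stone"]
domain = {w: rng.normal(size=50) for w in words}    # toy domain space
function = {w: rng.normal(size=50) for w in words}  # toy function space
print(relational_similarity("carpenter", "wood", "mason", "stone",
                            domain, function))
```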
Separating Disambiguation from Composition in Distributional Semantics
"... Most compositional-distributional models of meaning are based on ambiguous vector representations, where all the senses of a word are fused into the same vector. This paper provides evidence that the addition of a vector disambiguation step prior to the actual composition would be beneficial to the ..."
Abstract - Cited by 11 (3 self)
Most compositional-distributional models of meaning are based on ambiguous vector representations, where all the senses of a word are fused into the same vector. This paper provides evidence that the addition of a vector disambiguation step prior to the actual composition would be beneficial to the whole process, producing better composite representations. Furthermore, we relate this issue with the current evaluation practice, showing that disambiguation-based tasks cannot reliably assess the quality of composition. Using a word sense disambiguation scheme based on the generic procedure of Schütze (1998), we first provide a proof of concept for the necessity of separating disambiguation from composition. Then we demonstrate the benefits of an “unambiguous” system on a composition-only task.
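A sketch of the disambiguate-then-compose pipeline under a Schütze (1998)-style scheme: cluster an ambiguous word's context vectors into senses, pick the sense nearest the current context, then compose with that sense vector instead of the fused one. The data, the cluster count, and the additive composition step below are all placeholders:

```python
# Disambiguation via context-vector clustering, before composition.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Fabricated context vectors for 100 occurrences of an ambiguous word,
# drawn from two artificial "sense" distributions.
contexts = np.vstack([rng.normal(+1.0, 1.0, (50, 20)),
                      rng.normal(-1.0, 1.0, (50, 20))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(contexts)
sense_vectors = kmeans.cluster_centers_

def disambiguate(context_vector):
    """Return the sense vector whose cluster best matches the context."""
    return sense_vectors[kmeans.predict(context_vector[None, :])[0]]

current_context = rng.normal(+1.0, 1.0, 20)
sense = disambiguate(current_context)
other_word = rng.normal(0.0, 1.0, 20)
composed = sense + other_word  # compose with the disambiguated vector
```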
Metaphor comprehension: What makes a metaphor difficult to understand?
- Metaphor and Symbol, 2002
"... Comprehension difficulty was rated for metaphors of the form Noun1 -is-a Noun2; in addition, participants completed frames of the form Noun1 -is-________ with their literal interpretation of the metaphor. Metaphor comprehension was simulated with a computational model based on Latent Semantic Analys ..."
Abstract - Cited by 8 (1 self)
Comprehension difficulty was rated for metaphors of the form Noun1-is-a-Noun2; in addition, participants completed frames of the form Noun1-is-________ with their literal interpretation of the metaphor. Metaphor comprehension was simulated with a computational model based on Latent Semantic Analysis. The model matched participants' interpretations for both easy and difficult metaphors. When interpreting easy metaphors, both the participants and the model generated highly consistent responses. When interpreting difficult metaphors, both the participants and the model generated disparate responses.
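As a hedged illustration, an LSA-style simulation of the frame-completion task might complete "Noun1 is ________" with the vocabulary word nearest the combined topic and vehicle vectors; the vocabulary, vectors, and averaging rule below are toy assumptions, not the paper's model:

```python
# Nearest-neighbour frame completion in a stand-in LSA space.
import numpy as np

rng = np.random.default_rng(3)
vocab = ["shark", "lawyer", "predatory", "aggressive", "fish"]
vectors = {w: rng.normal(size=100) for w in vocab}  # stand-in LSA vectors

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def interpret(noun1, noun2):
    """Nearest neighbour of the averaged noun vectors, excluding the nouns."""
    target = (vectors[noun1] + vectors[noun2]) / 2.0
    candidates = [w for w in vocab if w not in (noun1, noun2)]
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(interpret("lawyer", "shark"))  # completes "A lawyer is ________"
```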