Results 1–10 of 50
Convolution Kernels for Natural Language
 Advances in Neural Information Processing Systems 14
, 2001
Abstract

Cited by 255 (7 self)
We describe the application of kernel methods to Natural Language Processing (NLP) problems. In many NLP tasks the objects being modeled are strings, trees, graphs or other discrete structures which require some mechanism to convert them into feature vectors. We describe kernels for various natural language structures, allowing rich, high dimensional representations of these structures. We show how a kernel over trees can be applied to parsing using the voted perceptron algorithm, and we give experimental results on the ATIS corpus of parse trees.
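The central piece of such a tree kernel is a recursion that counts common subtrees of two parse trees without ever enumerating them. A minimal sketch follows, assuming tuple-encoded trees; the function and encoding are illustrative, not the paper's implementation:

```python
# Sketch of a subtree-counting (convolution) tree kernel.
# Trees are nested tuples: (label, child, ...); leaf words are strings.

def _nodes(t):
    """Collect all internal nodes of a tree."""
    if isinstance(t, str):
        return []
    out = [t]
    for child in t[1:]:
        out.extend(_nodes(child))
    return out

def _production(t):
    """Production signature: parent label plus child labels."""
    return (t[0], tuple(c if isinstance(c, str) else c[0] for c in t[1:]))

def tree_kernel(t1, t2, lam=1.0):
    """K(t1, t2): (damped) count of common subtrees of t1 and t2."""
    memo = {}

    def common(n1, n2):
        if _production(n1) != _production(n2):
            return 0.0
        key = (id(n1), id(n2))
        if key not in memo:
            if all(isinstance(c, str) for c in n1[1:]):   # preterminal node
                memo[key] = lam
            else:
                val = lam
                for c1, c2 in zip(n1[1:], n2[1:]):
                    val *= 1.0 + common(c1, c2)
                memo[key] = val
        return memo[key]

    return sum(common(a, b) for a in _nodes(t1) for b in _nodes(t2))

t = ("S", ("NP", "dog"), ("VP", "ate"))
print(tree_kernel(t, t))   # 6.0: six matching subtree pairs for this tiny tree
```

Memoizing the pairwise recursion makes the kernel computable in time quadratic in the number of nodes, which is what makes the implicit high-dimensional subtree representation usable in practice.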
New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron
, 2002
Abstract

Cited by 214 (6 self)
This paper introduces new learning algorithms for natural language processing based on the perceptron algorithm. We show how the algorithms can be efficiently applied to exponentially sized representations of parse trees, such as the "all subtrees" (DOP) representation described by (Bod 98), or a representation tracking all subfragments of a tagged sentence. We give experimental results showing significant improvements on two tasks: parsing Wall Street Journal text, and named-entity extraction from web data.
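The trick that makes exponentially sized representations tractable is running the perceptron in dual form: candidates are scored through a kernel against previously misclassified examples, so the feature vectors are never built explicitly. A sketch of such a reranker follows (plain rather than voted perceptron for brevity; the interface is hypothetical, and `kernel` could be e.g. a subtree-counting tree kernel):

```python
# Kernelized perceptron reranker, dual form: a minimal illustrative sketch.

def train_reranker(data, kernel, epochs=3):
    """data: list of (candidates, gold_index). Returns a scoring function."""
    support = []  # (example, weight) pairs accumulated on mistakes

    def score(x):
        return sum(w * kernel(s, x) for s, w in support)

    for _ in range(epochs):
        for candidates, gold in data:
            pred = max(range(len(candidates)),
                       key=lambda i: score(candidates[i]))
            if pred != gold:                 # mistake-driven dual update
                support.append((candidates[gold], +1.0))
                support.append((candidates[pred], -1.0))
    return score

# Toy demo with numbers and a dot-product "kernel":
score = train_reranker([([1.0, 2.0], 1)], kernel=lambda a, b: a * b)
```

After training, the gold candidate outscores the initially preferred one; with real parse trees, only kernel evaluations against the stored mistakes are needed at test time.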
Parsing Algorithms and Metrics
 Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics
, 1996
Abstract

Cited by 92 (5 self)
Many different metrics exist for evaluating parsing results, including Viterbi, Crossing Brackets Rate, Zero Crossing Brackets Rate, and several others. However, most parsing algorithms, including the Viterbi algorithm, attempt to optimize the same metric, namely the probability of getting the correct labelled tree. By choosing a parsing algorithm appropriate for the evaluation metric, better performance can be achieved. We present two new algorithms: the "Labelled Recall Algorithm," which maximizes the expected Labelled Recall Rate, and the "Bracketed Recall Algorithm," which maximizes the Bracketed Recall Rate. Experimental results are given, showing that the two new algorithms have improved performance over the Viterbi algorithm on many criteria, especially the ones that they optimize.
Statistical Techniques for Natural Language Parsing
 AI Magazine
, 1997
Abstract

Cited by 90 (1 self)
We review current statistical work on syntactic parsing and then consider part-of-speech tagging, which was the first syntactic problem to be successfully attacked by statistical techniques and also serves as a good warm-up for the main topic, statistical parsing. Here we consider both the simplified case in which the input string is viewed as a string of parts of speech, and the more interesting case in which the parser is guided by statistical information about the particular words in the sentence. Finally we anticipate future research directions. 1 Introduction Syntactic parsing is the process of assigning a "phrase marker" to a sentence, that is, the process that, given a sentence like "The dog ate," produces a structure like that in Figure 1. In this example we adopt the standard abbreviations: np for "noun phrase," vp for "verb phrase," and det for "determiner." It is generally accepted that finding the sort of structure shown in Figure 1 is useful in determining the m...
Parsing Inside-Out
, 1998
Abstract

Cited by 83 (2 self)
Probabilistic Context-Free Grammars (PCFGs) and variations on them have recently become some of the most common formalisms for parsing. It is common with PCFGs to compute the inside and outside probabilities. When these probabilities are multiplied together and normalized, they produce the probability that any given nonterminal covers any piece of the input sentence. The traditional use of these probabilities is to improve the probabilities of grammar rules. In this thesis we show that these values are useful for solving many other problems in Statistical Natural Language Processing. We give a framework for describing parsers. The framework generalizes the inside and outside values to semirings. It makes it easy to describe parsers that compute a wide variety of interesting quantities, including the inside and outside probabilities, as well as related quantities such as Viterbi probabilities and n-best lists. We also present three novel uses for the inside and outside probabilities. T...
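The "multiply together and normalize" step can be sketched end to end for a toy PCFG in Chomsky normal form. The grammar and all names below are illustrative, not from the thesis:

```python
from collections import defaultdict

# Toy CNF PCFG, purely illustrative: rule -> probability.
BINARY = {("S", "NP", "VP"): 1.0, ("VP", "V", "NP"): 1.0}
LEXICAL = {("NP", "dogs"): 0.5, ("NP", "cats"): 0.5, ("V", "chase"): 1.0}

def inside(words):
    """alpha[(A, i, j)] = P(A derives words[i:j])."""
    n = len(words)
    a = defaultdict(float)
    for i, w in enumerate(words):
        for (A, word), p in LEXICAL.items():
            if word == w:
                a[(A, i, i + 1)] += p
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for (A, B, C), p in BINARY.items():
                    a[(A, i, j)] += p * a[(B, i, k)] * a[(C, k, j)]
    return a

def outside(words, a):
    """beta[(A, i, j)] = P(deriving everything outside i:j with an A 'hole')."""
    n = len(words)
    b = defaultdict(float)
    b[("S", 0, n)] = 1.0
    for width in range(n - 1, 0, -1):          # narrower spans need wider ones
        for i in range(n - width + 1):
            j = i + width
            for (A, B, C), p in BINARY.items():
                for k in range(j + 1, n + 1):  # current span is the B child
                    b[(B, i, j)] += p * b[(A, i, k)] * a[(C, j, k)]
                for k in range(i):             # current span is the C child
                    b[(C, i, j)] += p * b[(A, k, j)] * a[(B, k, i)]
    return b

words = ["dogs", "chase", "cats"]
a = inside(words)
b = outside(words, a)
sent_prob = a[("S", 0, len(words))]
# Multiply and normalize: P(a VP covers words[1:3] | sentence)
span_posterior = a[("VP", 1, 3)] * b[("VP", 1, 3)] / sent_prob
print(span_posterior)   # 1.0 here: every parse has a VP over "chase cats"
```

With an ambiguous grammar the posterior would fall strictly between 0 and 1, quantifying how much of the parse forest's probability mass supports each labelled span.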
Probabilistic CFG with Latent Annotations
, 2005
Abstract

Cited by 68 (1 self)
This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which nonterminal symbols are augmented with latent variables. Fine-grained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F1, sentences ≤ 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.
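The latent-annotation idea, each nonterminal carrying a hidden subcategory index, can be illustrated by the grammar-splitting step that typically initializes EM training. This is a sketch under an assumed dictionary-based grammar encoding; symmetry-breaking noise and the EM updates themselves are omitted:

```python
def split_grammar(binary, lexical, n_split=2):
    """Split every nonterminal A into latent subsymbols A_0 .. A_{n-1}.

    A rule A -> B C with probability p becomes n**3 annotated rules; for a
    fixed parent subsymbol A_i, the child combinations share p uniformly,
    so each annotated parent still defines a proper distribution.
    """
    new_binary = {}
    for (A, B, C), p in binary.items():
        for i in range(n_split):
            for j in range(n_split):
                for k in range(n_split):
                    new_binary[(f"{A}_{i}", f"{B}_{j}", f"{C}_{k}")] = p / n_split**2
    new_lexical = {(f"{A}_{i}", w): p
                   for (A, w), p in lexical.items()
                   for i in range(n_split)}
    return new_binary, new_lexical

binary = {("S", "NP", "VP"): 1.0}
lexical = {("NP", "dogs"): 1.0}
split_b, split_l = split_grammar(binary, lexical)
# Each annotated parent (S_0, S_1) still carries the original rule mass:
mass = sum(p for (A, B, C), p in split_b.items() if A == "S_0")
print(mass)   # 1.0
```

In practice a small random perturbation is added so EM can differentiate the subsymbols; the uniform split alone is a saddle point.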
Semiring Parsing
 Computational Linguistics
, 1999
Abstract

Cited by 64 (1 self)
this paper is that all five of these commonly computed quantities can be described as elements of complete semirings (Kuich 1997). The relationship between grammars and semirings was discovered by Chomsky and Schützenberger (1963), and for parsing with the CKY algorithm, dates back to Teitelbaum (1973). A complete semiring is a set of values over which a multiplicative operator and a commutative additive operator have been defined, and for which infinite summations are defined. For parsing algorithms satisfying certain conditions, the multiplicative and additive operations of any complete semiring can be used in place of × and +, and correct values will be returned. We will give a simple normal form for describing parsers, then precisely define complete semirings, and the conditions for correctness
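The idea can be sketched by parameterizing a single CKY-style chart recurrence over a semiring: swapping the additive operator changes which quantity the same code computes. The grammar and names below are assumed for illustration:

```python
# One chart recurrence, two semirings: (plus, times, zero, one).
# Swapping sum for max turns the inside probability into the Viterbi score.
INSIDE  = (lambda x, y: x + y, lambda x, y: x * y, 0.0, 1.0)
VITERBI = (max,                lambda x, y: x * y, 0.0, 1.0)

def chart_value(words, binary, lexical, semiring, goal="S"):
    plus, times, zero, one = semiring
    n = len(words)
    chart = {}
    get = lambda key: chart.get(key, zero)
    for i, w in enumerate(words):
        for (A, word), p in lexical.items():
            if word == w:
                chart[(A, i, i + 1)] = plus(get((A, i, i + 1)), p)
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for (A, B, C), p in binary.items():
                    val = times(p, times(get((B, i, k)), get((C, k, j))))
                    chart[(A, i, j)] = plus(get((A, i, j)), val)
    return get((goal, 0, n))

# Ambiguous toy grammar: "a a a" has two binary-branching parses.
binary = {("S", "S", "S"): 0.3}
lexical = {("S", "a"): 0.7}
words = ["a"] * 3
total = chart_value(words, binary, lexical, INSIDE)   # sum over both parses
best = chart_value(words, binary, lexical, VITERBI)   # best single parse
```

Here `total` is twice `best`, since the two parses of "a a a" are equally likely; only the `plus` operation differed between the two runs.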
A probabilistic corpus-driven model for lexical-functional analysis
 Proceedings of COLING-ACL'98
, 1998
Abstract

Cited by 59 (16 self)
We develop a Data-Oriented Parsing (DOP) model based on the syntactic representations of Lexical-Functional Grammar (LFG). We start by summarizing the original DOP model for tree representations and then show how it can be extended with corresponding functional structures. The resulting LFG-DOP model triggers a new, corpus-based notion of grammaticality, and its probability models exhibit interesting behavior with respect to specificity and the interpretation of ill-formed strings.
A DOP Model for Semantic Interpretation
 Proceedings of ACL/EACL'97
, 1997
Abstract

Cited by 37 (14 self)
In data-oriented language processing, an annotated language corpus is used as a stochastic grammar. The most probable analysis of a new sentence is constructed by combining fragments from the corpus in the most probable way. This approach has been successfully used for syntactic analysis, using corpora with syntactic annotations such as the Penn Treebank. If a corpus with semantically annotated sentences is used, the same approach can also generate the most probable semantic interpretation of an input sentence. The present paper explains this semantic interpretation method. A data-oriented semantic interpretation algorithm was tested on two semantically annotated corpora: the English ATIS corpus and the Dutch OVIS corpus.