Results 1–10 of 1,748,949
Maximum Entropy Techniques for Exploiting Syntactic, Semantic and Collocational Dependencies in Language Modeling
"... A new statistical language model is presented which combines collocational dependencies with two important sources of long-range statistical dependence: the syntactic structure and the topic of a sentence. These dependencies or constraints are integrated using the maximum entropy technique. Subs ..."
Cited by 57 (11 self)
Extending Maximum Entropy Techniques to Entropy Constraints
Gang Xiang
"... Abstract—In many practical situations, we have only partial information about the probabilities. In some cases, we have crisp (interval) bounds on the probabilities and/or on the related statistical characteristics. In other situations, we have fuzzy bounds, i.e., different interval bounds with diff ..."
"... distribution. Usually, as such a “typical” distribution, we select the one with the largest value of the entropy. This works perfectly well in usual cases when the information about the distribution consists of the values of moments and other characteristics. For example, if we only know the first and the ..."
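As a standalone illustration of the selection principle this snippet describes (my own sketch, not taken from the paper): when the available information is just the first two moments, so the mean \(\mu\) and variance \(\sigma^2\) are fixed, the entropy-maximizing distribution is the Gaussian.

```latex
% Maximize H(p) = -\int p(x)\ln p(x)\,dx subject to
%   \int p(x)\,dx = 1, \quad \int x\,p(x)\,dx = \mu, \quad \int x^2 p(x)\,dx = \mu^2 + \sigma^2.
% Lagrange multipliers give p^*(x) \propto \exp(\lambda_1 x + \lambda_2 x^2),
% which, after normalization, is the normal density:
p^*(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
```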
An Efficient Maximum Entropy Technique for 2D Isotropic Random Fields
In this paper, we present a new linear MEM algorithm for 2D isotropic random fields. Unlike general 2D covariances, isotropic covariance functions which are positive definite on a disk are known to be extendible. Here, we develop a computationally efficient procedure for computing the MEM isotropic spectral estimate corresponding to an isotropic covariance function which is given over a finite disk of radius 2R. We show that the isotropic MEM problem has a linear solution and that it is equivalent to the problem of constructing the optimal linear filter for estimating the underlying isotropic field at a point on the boundary of a disk of radius R given noisy measurements of the field inside the disk. The spectral estimation procedure described in this paper is guaranteed to yield a valid isotropic spectral estimate and is computationally efficient since it requires only O(BRL²) operations, where L is the number of points used to discretize the interval [0, R], and where B is the bandwidth in the wavenumber plane of the spectrum that we want to estimate. Examples are also presented to illustrate the behavior of the new algorithm and its high resolution property.
A Maximum-Entropy-Inspired Parser
, 1999
"... We present a new parser for parsing down to Penn treebank style parse trees that achieves 90.1% average precision/recall for sentences of length 40 and less, and 89.5% for sentences of length 100 and less when trained and tested on the previously established [5,9,10,15,17] "standard" se ..."
Cited by 963 (19 self)
"... sections of the Wall Street Journal treebank. This represents a 13% decrease in error rate over the best single-parser results on this corpus [9]. The major technical innovation is the use of a "maximum-entropy-inspired" model for conditioning and smoothing that lets us successfully test ..."
A Maximum Entropy Approach to Natural Language Processing
 COMPUTATIONAL LINGUISTICS
, 1996
"... The concept of maximum entropy can be traced back along multiple threads to Biblical times. Only recently, however, have computers become powerful enough to permit the wide-scale application of this concept to real world problems in statistical estimation and pattern recognition. In this paper we des ..."
Cited by 1341 (5 self)
Maximum entropy Markov models for information extraction and segmentation
, 2000
"... Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial ..."
Cited by 554 (18 self)
"... capitalization, formatting, part-of-speech), and defines the conditional probability of state sequences given observation sequences. It does this by using the maximum entropy framework to fit a set of exponential models that represent the probability of a state given an observation and the previous state. We ..."
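The per-state exponential model the snippet describes can be sketched roughly as follows. This is a minimal illustration with hypothetical features and hand-set weights, not code from the paper; in a real MEMM the weights are trained (e.g., by generalized iterative scaling) and features typically also condition on the previous state:

```python
import math

def memm_transition_prob(prev_state, obs, states, weights, features):
    """P(s' | s, o) as a maximum entropy (exponential) model:
    P(s' | s, o) = exp(sum_k w_k * f_k(s, o, s')) / Z(s, o)."""
    scores = {
        s: math.exp(sum(w * f(prev_state, obs, s)
                        for w, f in zip(weights, features)))
        for s in states
    }
    z = sum(scores.values())  # normalizer over candidate next states
    return {s: scores[s] / z for s in scores}

# Hypothetical binary features for a toy capitalization-based tagger.
features = [
    lambda prev, o, s: 1.0 if o[:1].isupper() and s == "NAME" else 0.0,
    lambda prev, o, s: 1.0 if prev == "NAME" and s == "NAME" else 0.0,
]
weights = [2.0, 0.5]

# Capitalized token observed after an "OTHER" state.
probs = memm_transition_prob("OTHER", "Alice", ["NAME", "OTHER"], weights, features)
```

Because the model is locally normalized over next states, a Viterbi-style search over these conditional distributions recovers the most likely state sequence.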
Using Maximum Entropy for Text Classification
, 1999
"... This paper proposes the use of maximum entropy techniques for text classification. Maximum entropy is a probability distribution estimation technique widely used for a variety of natural language tasks, such as language modeling, part-of-speech tagging, and text segmentation. The underlying principl ..."
Cited by 320 (6 self)
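For this entry, the classifier amounts to an exponential model over word-count features. A minimal sketch with hypothetical, hand-set weights (real maximum entropy classifiers learn these weights from labeled data, e.g., by improved iterative scaling):

```python
import math
from collections import Counter

def maxent_classify(doc_words, classes, weights):
    """P(c | d) = exp(sum_w lambda[c, w] * count(w, d)) / Z(d)."""
    counts = Counter(doc_words)
    scores = {
        c: math.exp(sum(weights.get((c, w), 0.0) * n
                        for w, n in counts.items()))
        for c in classes
    }
    z = sum(scores.values())  # per-document normalizer
    return {c: scores[c] / z for c in scores}

# Hypothetical (class, word) weights for two toy classes.
weights = {
    ("sports", "goal"): 1.5,
    ("politics", "vote"): 1.5,
}
p = maxent_classify(["goal", "goal", "match"], ["sports", "politics"], weights)
```

With only these feature constraints, the model is the most uniform (highest-entropy) distribution consistent with them, which is the motivation the abstract refers to.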
A Maximum Entropy Model for Part-Of-Speech Tagging
, 1996
"... This paper presents a statistical model which trains from a corpus annotated with Part-Of-Speech tags and assigns them to previously unseen text with state-of-the-art accuracy (96.6%). The model can be classified as a Maximum Entropy model and simultaneously uses many contextual "features" t ..."
Cited by 577 (1 self)
Discriminative Training and Maximum Entropy Models for Statistical Machine Translation
, 2002
"... We present a framework for statistical machine translation of natural languages based on direct maximum entropy models, which contains the widely used source-channel approach as a special case. All knowledge sources are treated as feature functions, which depend on the source language senten ..."
Cited by 497 (30 self)