Results 1–10 of 2,647,240
Gradient flows in metric spaces and in the space of probability measures
 Lectures in Mathematics ETH Zürich, Birkhäuser Verlag
, 2005
"... ..."
Probabilistic Part-of-Speech Tagging Using Decision Trees
, 1994
"... In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, ..."
Cited by 1009 (9 self)
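The snippet above describes estimating tag-transition probabilities with a decision tree instead of raw, sparse bigram counts. A minimal sketch of the idea, with a hypothetical tagset and a single hand-picked question (the actual method grows the tree from data):

```python
from collections import Counter

# Toy tagged corpus; sentences and tags are hypothetical examples.
sentences = [
    [("the", "DET"), ("dog", "NOUN"), ("runs", "VERB")],
    [("a", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")],
    [("dogs", "NOUN"), ("run", "VERB")],
]

# Collect tag-bigram counts, with "<s>" marking the sentence start.
bigrams = Counter()
for sent in sentences:
    prev = "<s>"
    for _, tag in sent:
        bigrams[(prev, tag)] += 1
        prev = tag

def p_tag(tag, prev):
    """Decision-tree-style estimate of P(tag | prev): one yes/no question
    at the root ("was the previous tag DET?") routes the context to a leaf,
    and all remaining sparse contexts share one pooled leaf instead of
    each keeping its own poorly estimated multinomial."""
    if prev == "DET":
        leaf = [(k, c) for k, c in bigrams.items() if k[0] == "DET"]
    else:
        leaf = [(k, c) for k, c in bigrams.items() if k[0] != "DET"]
    total = sum(c for _, c in leaf)
    hits = sum(c for (_, t), c in leaf if t == tag)
    return hits / total if total else 0.0

print(p_tag("NOUN", "DET"))   # → 1.0 in this toy corpus
```

Pooling sparse contexts at shared leaves is what lets the tree avoid the zero-count transitions a plain Markov-model tagger must smooth away.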
Graphical models, exponential families, and variational inference
, 2008
"... The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building largescale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fiel ..."
Cited by 800 (26 self)
likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms — among them sum-product, cluster variational methods, expectation-propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations — can
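Among the algorithms the snippet lists, sum-product is the simplest to make concrete. A minimal sketch of sum-product message passing on a three-node chain of binary variables, with illustrative potentials (not from the paper):

```python
# Sum-product on a chain x1 - x2 - x3 of binary variables.
# psi1[a][b] is the pairwise potential on (x1, x2); psi2[b][c] on (x2, x3).
psi1 = [[1.0, 2.0], [3.0, 1.0]]
psi2 = [[2.0, 1.0], [1.0, 2.0]]

def marginal_x2():
    """Marginal of the middle node: multiply the two incoming messages,
    each obtained by summing its neighboring potential over the far end."""
    m_left = [sum(psi1[a][b] for a in range(2)) for b in range(2)]   # from x1
    m_right = [sum(psi2[b][c] for c in range(2)) for b in range(2)]  # from x3
    unnorm = [m_left[b] * m_right[b] for b in range(2)]
    z = sum(unnorm)                      # local normalization constant
    return [u / z for u in unnorm]

print(marginal_x2())   # → [4/7, 3/7]
```

On a tree these messages yield exact marginals; run on a loopy graph, the same updates give the approximate "loopy" variant that the variational framework in the paper situates alongside mean field and the other relaxations.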
Bayes Factors
, 1995
"... In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null ..."
Cited by 1766 (74 self)
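The Bayes factor the snippet describes has a closed form in the simplest binomial setting. A minimal sketch, assuming a point null theta = 1/2 against a uniform prior on theta under the alternative (the choice of prior is illustrative):

```python
from math import comb

def bayes_factor_01(k, n):
    """BF_01 for k successes in n Bernoulli trials:
    H0: theta = 1/2 (point null) vs H1: theta ~ Uniform(0, 1).
    Under H1 the marginal likelihood integrates to
    C(n, k) * Beta(k + 1, n - k + 1) = 1 / (n + 1)."""
    m0 = comb(n, k) * 0.5 ** n    # marginal likelihood under H0
    m1 = 1.0 / (n + 1)            # marginal likelihood under H1
    return m0 / m1                # equals posterior odds at even prior odds

print(bayes_factor_01(6, 10))   # → about 2.26: mild evidence for the null
```

Values above 1 favor the null, as in Jeffreys's reading; 6 heads in 10 tosses barely moves the needle, which is exactly the kind of calibrated statement the Bayes factor was designed to make.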
Markov Random Field Models in Computer Vision
, 1994
"... . A variety of computer vision problems can be optimally posed as Bayesian labeling in which the solution of a problem is defined as the maximum a posteriori (MAP) probability estimate of the true labeling. The posterior probability is usually derived from a prior model and a likelihood model. The l ..."
Cited by 515 (18 self)
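For the chain-structured special case of the MRFs described above, the MAP labeling can be computed exactly by dynamic programming. A minimal sketch that denoises a binary sequence with a data term plus a smoothness prior; the costs and observations are illustrative:

```python
# Exact MAP labeling of a 1-D binary MRF (a chain) by dynamic programming.
# Energy = sum of data terms + smoothness terms; MAP minimizes the energy.
DATA_COST = 1.0     # penalty for a label disagreeing with its observation
SMOOTH_COST = 1.5   # penalty for neighboring labels that differ

def map_labeling(obs):
    n = len(obs)
    cost = [[0.0, 0.0] for _ in range(n)]   # best energy ending in each label
    back = [[0, 0] for _ in range(n)]       # backpointers for the optimum
    for lab in (0, 1):
        cost[0][lab] = DATA_COST * (lab != obs[0])
    for i in range(1, n):
        for lab in (0, 1):
            data = DATA_COST * (lab != obs[i])
            prev_costs = [cost[i - 1][p] + SMOOTH_COST * (p != lab)
                          for p in (0, 1)]
            best = min((0, 1), key=lambda p: prev_costs[p])
            cost[i][lab] = data + prev_costs[best]
            back[i][lab] = best
    lab = min((0, 1), key=lambda l: cost[n - 1][l])
    labels = [lab]
    for i in range(n - 1, 0, -1):           # trace the backpointers
        lab = back[i][lab]
        labels.append(lab)
    return labels[::-1]

print(map_labeling([0, 0, 1, 0, 1, 1, 1]))   # → [0, 0, 0, 0, 1, 1, 1]
```

The isolated flips are smoothed away because paying one data penalty (1.0) is cheaper than the extra label boundaries (1.5 each); on 2-D image grids the same MAP problem is what the iterative and graph-based optimizers surveyed in the paper approximate.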
Exceptional Exporter Performance: Cause, Effect or Both
 Journal of International Economics
, 1999
"... A growing body of empirical work has documented the superior performance characteristics of exporting plants and firms relative to nonexporters. Employment, shipments, wages, productivity and capital intensity are all higher at exporters at any given moment. This paper asks whether good firms becom ..."
Cited by 685 (19 self)
Exploiting Generative Models in Discriminative Classifiers
 In Advances in Neural Information Processing Systems 11
, 1998
"... Generative probability models such as hidden Markov models provide a principled way of treating missing information and dealing with variable length sequences. On the other hand, discriminative methods such as support vector machines enable us to construct flexible decision boundaries and often resu ..."
Cited by 538 (11 self)
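The combination the snippet describes is the Fisher kernel idea: map each sequence to the gradient of its log-likelihood under a fitted generative model, then hand those fixed-length vectors to a discriminative classifier. A minimal sketch with a toy Bernoulli model standing in for the paper's hidden Markov models:

```python
def fisher_score(bits, theta):
    """d/d theta of sum_i log P(bit_i | theta) for a Bernoulli(theta) model.
    Sequences of any length map to a fixed-size feature (a scalar here),
    which is what lets a discriminative classifier consume them."""
    return sum(b / theta - (1 - b) / (1 - theta) for b in bits)

theta = 0.5                                  # assumed fitted parameter
seqs = [[1, 1, 1, 0], [0, 0, 1, 0]]          # variable-length binary sequences
features = [fisher_score(s, theta) for s in seqs]
print(features)   # → [4.0, -4.0]
```

These score vectors could now be fed to a support vector machine; the kernel in the paper additionally normalizes by the inverse Fisher information, which this sketch omits.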
Maximum entropy Markov models for information extraction and segmentation
, 2000
"... Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many textrelated tasks, such as partofspeech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial ..."
Cited by 554 (18 self)
, capitalization, formatting, part-of-speech), and defines the conditional probability of state sequences given observation sequences. It does this by using the maximum entropy framework to fit a set of exponential models that represent the probability of a state given an observation and the previous state. We
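The exponential models the snippet describes amount to a softmax over weighted binary features of the observation and the previous state. A minimal sketch; the states, features, and weights below are hypothetical:

```python
from math import exp

STATES = ["B", "I", "O"]   # hypothetical tagset

def features(obs, prev_state, state):
    """Binary features of (observation, previous state, candidate state)."""
    return {
        ("capitalized", state): float(obs[0].isupper()),
        ("prev", prev_state, state): 1.0,
    }

weights = {                       # illustrative learned weights
    ("capitalized", "B"): 2.0,    # capitalized words tend to begin entities
    ("prev", "B", "I"): 1.5,      # B is often followed by I
}

def p_next(obs, prev_state):
    """P(state | observation, previous state) as a maximum-entropy model:
    exponentiate each candidate state's weighted feature sum, normalize."""
    scores = {
        s: exp(sum(weights.get(f, 0.0) * v
                   for f, v in features(obs, prev_state, s).items()))
        for s in STATES
    }
    z = sum(scores.values())
    return {s: sc / z for s, sc in scores.items()}

print(p_next("London", "O")["B"])   # capitalization pushes mass onto B
```

Because the model conditions on the observation rather than generating it, arbitrary overlapping features (capitalization, formatting, part-of-speech) can be added without the independence assumptions an HMM would require.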
Studies of transformation of Escherichia coli with plasmids
 J. Mol. Biol
, 1983
"... Factors that affect he probability of genetic transformation f Escherichia coli by plasmids have been evaluated. A set of conditions is described under which about one in every 400 plasmid molecules produces a transformed cell. These conditions include cell growth in medium containing elevated level ..."
Cited by 1609 (1 self)