Results 1 - 5 of 5
A latent variable model for generative dependency parsing
In Proceedings of IWPT, 2007
"... We propose a generative dependency parsing model which uses binary latent variables to induce conditioning features. To define this model we use a recently proposed class of Bayesian Networks for structured prediction, Incremental Sigmoid Belief Networks. We demonstrate that the proposed model achie ..."
Abstract

Cited by 41 (7 self)
We propose a generative dependency parsing model which uses binary latent variables to induce conditioning features. To define this model we use a recently proposed class of Bayesian Networks for structured prediction, Incremental Sigmoid Belief Networks. We demonstrate that the proposed model achieves state-of-the-art results on three different languages. We also demonstrate that the features induced by the ISBN’s latent variables are crucial to this success, and show that the proposed model is particularly good on long dependencies.
Incremental Bayesian Networks for Structure Prediction
"... We propose a class of graphical models appropriate for structure prediction problems where the model structure is a function of the output structure. Incremental Sigmoid Belief Networks (ISBNs) avoid the need to sum over the possible model structures by using directed arcs and incrementally specifyi ..."
Abstract

Cited by 5 (3 self)
We propose a class of graphical models appropriate for structure prediction problems where the model structure is a function of the output structure. Incremental Sigmoid Belief Networks (ISBNs) avoid the need to sum over the possible model structures by using directed arcs and incrementally specifying the model structure. Exact inference in such directed models is not tractable, but we derive two efficient approximations based on mean field methods, which prove effective in artificial experiments. We then demonstrate their effectiveness on a benchmark natural language parsing task, where they achieve state-of-the-art accuracy. Also, the model which is a closer approximation to an ISBN has better parsing accuracy, suggesting that ISBNs are an appropriate abstract model of structure prediction tasks.
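The "coarse mean-field" view described in this abstract — a feed-forward neural network pass reinterpreted as variational inference in a directed sigmoid belief network — can be sketched as follows. This is an illustrative toy, not the paper's actual model: the function name, layer layout, and weights are all made up for the example.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def coarse_mean_field(parent_means, weight_layers, biases):
    """One forward sweep of coarse mean-field inference through a
    directed sigmoid belief network: each binary unit's variational
    mean is the sigmoid of the weighted means of its parents, so
    inference reduces to a neural-network-style forward pass."""
    means = [list(parent_means)]
    for W, b in zip(weight_layers, biases):
        prev = means[-1]
        means.append([
            sigmoid(sum(w * m for w, m in zip(row, prev)) + b_i)
            for row, b_i in zip(W, b)
        ])
    return means

# Toy example: two observed parent units, one layer of two latent units.
layer_means = coarse_mean_field([1.0, 0.0],
                                [[[2.0, -1.0], [0.5, 0.5]]],
                                [[0.0, 0.0]])
```

Because the arcs are directed and specified incrementally, a single sweep in topological order suffices; the more accurate variational approximation the papers derive would instead iterate updates until the means converge.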
Incremental sigmoid belief networks for grammar learning
Journal of Machine Learning Research, 2010
"... We propose a class of Bayesian networks appropriate for structured prediction problems where the Bayesian network’s model structure is a function of the predicted output structure. These incremental sigmoid belief networks (ISBNs) make decoding possible because inference with partial output structur ..."
Abstract

Cited by 4 (2 self)
We propose a class of Bayesian networks appropriate for structured prediction problems where the Bayesian network’s model structure is a function of the predicted output structure. These incremental sigmoid belief networks (ISBNs) make decoding possible because inference with partial output structures does not require summing over the unboundedly many compatible model structures, due to their directed edges and incrementally specified model structure. ISBNs are specifically targeted at challenging structured prediction problems such as natural language parsing, where learning the domain’s complex statistical dependencies benefits from large numbers of latent variables. While exact inference in ISBNs with large numbers of latent variables is not tractable, we propose two efficient approximations. First, we demonstrate that a previous neural network parsing model can be viewed as a coarse mean-field approximation to inference with ISBNs. We then derive a more accurate but still tractable variational approximation, which proves effective in artificial experiments. We compare the effectiveness of these models on a benchmark natural language parsing task, where they achieve accuracy competitive with the state-of-the-art. The model which is a closer approximation to an ISBN achieves better accuracy, suggesting that ISBNs are an appropriate abstract model of natural language grammar learning.
Identification of Multiword Expressions by Combining Multiple Linguistic Information Sources
"... We propose an architecture for expressing various linguisticallymotivated features that help identify multiword expressions in natural language texts. The architecture combines various linguisticallymotivated classification features in a Bayesian Network. We introduce novel ways for computing man ..."
Abstract

Cited by 1 (1 self)
We propose an architecture for expressing various linguistically-motivated features that help identify multiword expressions in natural language texts. The architecture combines various linguistically-motivated classification features in a Bayesian Network. We introduce novel ways for computing many of these features, and manually define linguistically-motivated interrelationships among them, which the Bayesian network models. Our methodology is almost entirely unsupervised and completely language-independent; it relies on few language resources and is thus suitable for a large number of languages. Furthermore, unlike much recent work, our approach can identify expressions of various types and syntactic constructions. We demonstrate a significant improvement in identification accuracy, compared with less sophisticated baselines.
of dynamic Sigmoid Belief Networks
"... We introduce a framework for syntactic parsing with latent variables based on a form ..."
Abstract
We introduce a framework for syntactic parsing with latent variables based on a form