Results 1 - 7 of 7
Alternating projections for learning with expectation constraints
In Proc. UAI, 2009
Abstract

Cited by 26 (3 self)
We present an objective function for learning with unlabeled data that utilizes auxiliary expectation constraints. We optimize this objective function using a procedure that alternates between information and moment projections. Our method provides an alternate interpretation of the posterior regularization framework (Graca et al., 2008), maintains uncertainty during optimization unlike constraint-driven learning (Chang et al., 2007), and is more efficient than generalized expectation criteria (Mann & McCallum, 2008). Applications of this framework include minimally supervised learning, semi-supervised learning, and learning with constraints that are more expressive than the underlying model. In experiments, we demonstrate comparable accuracy to generalized expectation criteria for minimally supervised learning, and use expressive structural constraints to guide semi-supervised learning, providing a 3-6% improvement over state-of-the-art constraint-driven learning.
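The alternating-projection idea can be illustrated in miniature: repeatedly I-project a distribution onto one moment-constraint set, then onto another, until both expectation constraints hold. This is a toy sketch with two affine moment constraints and exponential tilting, not the paper's exact model/constraint-set split; all features, targets, and numbers are made up for illustration.

```python
import math

def project_moment(q, f, target, iters=60):
    """I-project q (a discrete distribution) onto {r : E_r[f] = target}.
    The projection is an exponential tilt r(x) ~ q(x) * exp(lam * f(x));
    lam is found by bisection, since E_r[f] increases monotonically in lam."""
    lo, hi = -50.0, 50.0
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        w = [qi * math.exp(lam * fi) for qi, fi in zip(q, f)]
        z = sum(w)
        e = sum(wi * fi for wi, fi in zip(w, f)) / z
        if e < target:
            lo = lam
        else:
            hi = lam
    return [wi / z for wi in w]

# distributions over x in {0, 1, 2}
f_mean = [0.0, 1.0, 2.0]   # feature: x itself
f_zero = [1.0, 0.0, 0.0]   # feature: indicator of x == 0

q = [1/3, 1/3, 1/3]        # start from the uniform distribution
for _ in range(50):
    q = project_moment(q, f_mean, 1.0)   # enforce E[x]   = 1.0
    q = project_moment(q, f_zero, 0.3)   # enforce P(x=0) = 0.3

# q converges to the unique distribution in the intersection: (0.3, 0.4, 0.3)
```

With affine moment constraints these alternating I-projections reduce to iterative proportional fitting, which is why the iterates settle into the intersection of the two constraint sets.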
High-Performance Semi-Supervised Learning using Discriminatively Constrained Generative Models
Abstract

Cited by 15 (1 self)
We develop a semi-supervised learning method that constrains the posterior distribution of latent variables under a generative model to satisfy a rich set of feature expectation constraints estimated with labeled data. This approach encourages the generative model to discover latent structure that is relevant to a prediction task. We estimate parameters with a coordinate ascent algorithm, one step of which involves training a discriminative log-linear model with an embedded generative model. This hybrid model can be used for test-time prediction. Unlike other high-performance semi-supervised methods, the proposed algorithm converges to a stationary point of a single objective function, and affords additional flexibility, for example to use different latent and output spaces. We conduct experiments on three sequence labeling tasks, achieving the best reported results on two of them, and showing promising results on CoNLL-03 NER.
Semi-Supervised Learning via Generalized Maximum Entropy
Abstract

Cited by 5 (1 self)
Various supervised inference methods can be analyzed as convex duals of the generalized maximum entropy (MaxEnt) framework. Generalized MaxEnt aims to find a distribution that maximizes an entropy function while respecting prior information represented as potential functions in miscellaneous forms of constraints and/or penalties. We extend this framework to semi-supervised learning by incorporating unlabeled data via modifications to these potential functions reflecting structural assumptions on the data geometry. The proposed approach leads to a family of discriminative semi-supervised algorithms that are convex, scalable, inherently multiclass, easy to implement, and naturally kernelizable. Experimental evaluation of special cases shows the competitiveness of our methodology.
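The duality the abstract invokes can be seen on a toy binary problem: maximizing the conditional likelihood of a logistic model is the convex dual of conditional MaxEnt, so at the optimum the model's expected feature counts equal the empirical counts. A minimal sketch with a made-up dataset (every pattern occurs with both labels, so the optimum is finite; all numbers are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical dataset: each input pattern appears with mixed labels
X = [(0, 0)] * 3 + [(1, 0)] * 3 + [(0, 1)] * 2 + [(1, 1)] * 3
Y = [0, 0, 1,      1, 1, 0,      0, 1,          1, 1, 0]

w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(5000):
    gw, gb = [0.0, 0.0], 0.0
    for x, y in zip(X, Y):
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # gradient of the log-likelihood = empirical minus expected counts
        gw[0] += (y - p) * x[0]
        gw[1] += (y - p) * x[1]
        gb    += (y - p)
    w[0] += lr * gw[0]; w[1] += lr * gw[1]; b += lr * gb

# at convergence the gradient vanishes, i.e. the MaxEnt moment
# constraints E_model[f] = E_empirical[f] are satisfied
```

After training, recomputing the gradient gives values near zero for every feature, which is exactly the moment-matching condition that characterizes the MaxEnt primal solution.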
Expansion of Protected Areas under Climate Change: An Example of Mountainous Tree Species in Taiwan
, 2014
"... www.mdpi.com/journal/forests ..."
Improved Information Structure Analysis of Scientific Documents Through Discourse and Lexical Constraints
Abstract

Cited by 1 (1 self)
Inferring the information structure of scientific documents is useful for many downstream applications. Existing feature-based machine learning approaches to this task require substantial training data and suffer from limited performance. Our idea is to guide feature-based models with declarative domain knowledge encoded as posterior distribution constraints. We explore a rich set of discourse and lexical constraints which we incorporate through the Generalized Expectation (GE) criterion. Our constrained model improves the performance of existing fully and lightly supervised models. Even a fully unsupervised version of this model outperforms lightly supervised feature-based models, showing that our approach can be useful even when no labeled data is available.
Semi-supervised Learning of Naive Bayes Classifier with Feature Constraints