Results 1 – 7 of 7
Integer linear programming inference for conditional random fields
In Proc. of the International Conference on Machine Learning (ICML), 2005
Abstract

Cited by 69 (13 self)
Inference in Conditional Random Fields and Hidden Markov Models is done using the Viterbi algorithm, an efficient dynamic programming algorithm. In many cases, general (nonlocal and nonsequential) constraints may exist over the output sequence, but cannot be incorporated and exploited in a natural way by this inference procedure. This paper proposes a novel inference procedure based on integer linear programming (ILP) and extends CRF models to naturally and efficiently support general constraint structures. For sequential constraints, this procedure reduces to simple linear programming as the inference process. Experimental evidence is supplied in the context of an important NLP problem, semantic role labeling.
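The kind of formulation the abstract describes can be sketched in miniature. Below, each position in a toy sequence gets a 0/1 indicator per label; the objective sums hypothetical emission and transition scores, and a nonlocal constraint ("at most one B label") is added that plain Viterbi cannot express. The labels, scores, and constraint are all illustrative, and a brute-force search over assignments stands in for a real ILP solver:

```python
# Sketch of constrained sequence inference in the spirit of the
# indicator-variable ILP formulation. All scores are toy numbers;
# brute force stands in for an actual ILP solver.
from itertools import product

labels = ["O", "B", "I"]
# emission[i][y]: score for label y at position i (hypothetical)
emission = [
    {"O": 0.1, "B": 0.8, "I": 0.1},
    {"O": 0.2, "B": 0.6, "I": 0.2},
    {"O": 0.7, "B": 0.1, "I": 0.2},
]
# pairwise transition scores; unlisted pairs score 0.0
transition = {("B", "I"): 0.5, ("O", "B"): 0.3}

def score(seq):
    s = sum(emission[i][y] for i, y in enumerate(seq))
    s += sum(transition.get((a, b), 0.0) for a, b in zip(seq, seq[1:]))
    return s

def violates(seq):
    # A nonlocal constraint Viterbi cannot encode locally: at most one "B".
    return seq.count("B") > 1

# Enumerate all label assignments (i.e. all feasible 0/1 indicator
# settings); an ILP solver searches this space far more efficiently.
best = max((s for s in product(labels, repeat=3) if not violates(s)), key=score)
print(best)  # → ('B', 'I', 'O')
```

Note that the unconstrained optimum here would use "B" twice; the global constraint forces the decoder onto a different, coherent sequence, which is exactly the effect the paper's ILP machinery provides at scale.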
The importance of syntactic parsing and inference in semantic role labeling
Computational Linguistics, 2008
Abstract

Cited by 49 (17 self)
We present a general framework for semantic role labeling. The framework combines a machine learning technique with an integer linear programming based inference procedure, which incorporates linguistic and structural constraints into a global decision process. Within this framework, we study the role of syntactic parsing information in semantic role labeling. We show that full syntactic parsing information is, by far, most relevant in identifying the argument, especially in the very first stage, the pruning stage. Surprisingly, the quality of the pruning stage cannot be solely determined based on its recall and precision. Instead, it depends on the characteristics of the output candidates that determine the difficulty of the downstream problems. Motivated by this observation, we propose an effective and simple approach of combining different semantic role labeling systems through joint inference, which significantly improves performance. Our system has been evaluated in the CoNLL-2005 shared task on semantic role labeling, and achieves the highest F1 score among 19 participants.
KNITRO: An integrated package for nonlinear optimization
Large-Scale Nonlinear Optimization, 35–59, 2006
Abstract

Cited by 38 (3 self)
This paper describes Knitro 5.0, a C++ package for nonlinear optimization that combines complementary approaches to nonlinear optimization to achieve robust performance over a wide range of application requirements. The package is designed for solving large-scale, smooth nonlinear programming problems, and it is also effective for the following special cases: unconstrained optimization, nonlinear systems of equations, least squares, and linear and quadratic programming. Various algorithmic options are available, including two interior methods and an active-set method. The package provides crossover techniques between algorithmic options as well as automatic selection of options and settings.
Global Inference Using Integer Linear Programming
, 2004
Abstract

Cited by 1 (0 self)
This report is a supplemental document to some of our papers [5, 3, 4]. It gives a simple but complete step-by-step case study, which demonstrates how we apply integer linear programming to solve a global inference problem in natural language processing. This framework first transforms an optimization problem into an integer linear program. The program can then be solved using …
Exploiting Structure in Integer Programs
, 2011
Abstract
This dissertation argues the case for exploiting certain structures in integer linear programs. Integer linear programming is a well-known optimisation problem, which seeks the optimum of a linear function of variables, whose values are required to be integral as well as to satisfy certain linear equalities and inequalities. The state of the art in solvers for this problem is the “branch and bound” approach. The performance of such solvers depends crucially on four types of inbuilt heuristics: primal, improvement, branching, and cut-separation or, more generally, bounding heuristics. Such heuristics in general-purpose solvers have not, until recently, exploited structure in integer linear programs beyond the recognition of certain types of single-row constraints. Many alternative approaches to integer linear programming can be cast in the following, novel framework. “Structure” in any integer linear program …
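The branch-and-bound loop and the bounding heuristics the abstract mentions can be illustrated on a tiny 0/1 integer program. The sketch below solves a toy knapsack instance (the values and weights are illustrative, not from the dissertation): branching fixes one variable at a time, and the bound from a greedy fractional relaxation prunes subtrees that cannot beat the incumbent:

```python
# Minimal branch-and-bound sketch on a toy 0/1 knapsack integer program.
# Values/weights are illustrative; the fractional relaxation plays the
# role of the bounding heuristic discussed in the abstract.
values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

def fractional_bound(i, value, room):
    """Optimistic upper bound: fill remaining room greedily, fractions allowed."""
    for v, w in sorted(zip(values[i:], weights[i:]), key=lambda t: -t[0] / t[1]):
        if w <= room:
            value += v
            room -= w
        else:
            return value + v * room / w  # take a fraction of this item
    return value

best = 0  # incumbent: best integral solution value found so far

def branch(i, value, room):
    global best
    if value > best:
        best = value  # improvement step: update the incumbent
    if i == len(values) or fractional_bound(i, value, room) <= best:
        return  # bounding step: this subtree cannot beat the incumbent
    if weights[i] <= room:                 # branch: x_i = 1 (take item i)
        branch(i + 1, value + values[i], room - weights[i])
    branch(i + 1, value, room)             # branch: x_i = 0 (skip item i)

branch(0, 0, capacity)
print(best)  # → 220
```

The pruning test is where the four heuristic families interact: a better primal/improvement heuristic raises `best` earlier, and a tighter bound prunes more, which is precisely why solver performance hinges on them.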
Natural Language Decisions are Structured
, 2012
Abstract
• Global decisions in which several local decisions play a role but there are mutual dependencies on their outcome. It is essential to make coherent decisions in a way that takes the interdependencies into account. Joint, Global Inference.
TODAY:
• How to support making global, coherent decisions
• How to learn models that are used, eventually, to make global decisions
• A framework that allows one to exploit interdependencies among decision variables both in inference (decision making) and in learning.
• Inference: a formulation for inference with expressive declarative knowledge.
• Learning: the ability to learn simple models; amplify their power by exploiting interdependencies.
Learning and Inference for Information Extraction
, 2005
Abstract
Information extraction is a process that extracts limited semantic concepts from text documents and presents them in an organized way. Unlike several other natural language tasks, information extraction has a direct impact on end-user applications. Despite its importance, information extraction is still a difficult task due to the inherent complexity and ambiguity of human languages. Moreover, mutual dependencies between local predictions of the target concepts further increase the difficulty of the task. In order to enhance information extraction technologies, we develop general approaches for two aspects – relational feature generation and global inference with classifiers. It has been quite convincingly argued that relational learning is suitable in training a complicated natural language system. We propose a relational feature generation approach that facilitates relational learning through propositional learning algorithms. In particular, we develop a relational representation language to produce features in a data-driven way. The resulting features capture the relational structures of a given domain, and therefore allow the learning algorithms to effectively learn the relational definitions of target concepts. Although the learned classifier can be used to directly predict the target concepts, conflicts between the labels of different target variables often occur due to imperfect classifiers. We propose …