Results 1–10 of 28
Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In ACL, 2005.
Cited by 477 (14 self)
Abstract
Discriminative reranking is one method for constructing high-performance statistical parsers (Collins, 2000). A discriminative reranker requires a source of candidate parses for each sentence. This paper describes a simple yet novel method for constructing sets of 50-best parses based on a coarse-to-fine generative parser (Charniak, 2000). This method generates 50-best lists that are of substantially higher quality than previously obtainable.
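As a sketch of what a reranker does with such an n-best list, the selection step can be written in a few lines. The feature names and weights below are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of discriminative reranking over an n-best list.
# Feature names, values, and weights here are made-up illustrations.

def rerank(candidates, weights):
    """Return the candidate parse whose weighted feature score is highest.

    candidates: list of (parse, features) pairs, where features maps a
                feature name to its value for that parse.
    weights:    feature name -> learned weight from reranker training.
    """
    def score(features):
        return sum(weights.get(f, 0.0) * v for f, v in features.items())
    return max(candidates, key=lambda c: score(c[1]))[0]

# Toy 3-best list for one sentence (values are fabricated for the demo).
nbest = [
    ("(S (NP ...) (VP ...))",  {"log_p_generative": -10.2, "right_branch": 3}),
    ("(S (NP ...) (VP' ...))", {"log_p_generative": -10.5, "right_branch": 5}),
    ("(S (VP ...) (NP ...))",  {"log_p_generative": -11.0, "right_branch": 1}),
]
w = {"log_p_generative": 1.0, "right_branch": 0.2}
best = rerank(nbest, w)
```

The generative parser's log probability typically appears as one feature among many, which is why a higher-quality n-best source directly improves the reranker's ceiling.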
Object-oriented software for quadratic programming. ACM Transactions on Mathematical Software, 2001.
Cited by 78 (2 self)
Abstract
The object-oriented software package OOQP for solving convex quadratic programming problems (QPs) is described. The primal-dual interior-point algorithms supplied by OOQP are implemented in a way that is largely independent of the problem structure. Users may exploit problem structure by supplying linear algebra, problem data, and variable classes that are customized to their particular applications. The OOQP distribution contains default implementations that solve several important QP problem types, including general sparse and dense QPs, bound-constrained QPs, and QPs arising from support vector machines and Huber regression. The implementations supplied with the OOQP distribution are based on such well-known linear algebra packages as MA27/57, LAPACK, and PETSc. OOQP demonstrates the usefulness of object-oriented design in optimization software development, and establishes standards that can be followed in the design of software packages for other classes of optimization problems. A number of the classes in OOQP may also be reusable directly in other codes.
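The problem class OOQP targets can be illustrated on its simplest case. The sketch below solves an equality-constrained convex QP directly through its KKT system; it is a toy stand-in, not OOQP's API, and omits the interior-point machinery needed for inequality constraints:

```python
import numpy as np

# min 0.5 x'Qx + c'x  subject to  Ax = b  (equality-constrained convex QP).
# For this special case the optimum is found by one linear solve of the
# KKT system; OOQP's primal-dual interior-point iterations handle the
# general case with inequalities, which this sketch does not reproduce.

def solve_eq_qp(Q, c, A, b):
    n, m = Q.shape[0], A.shape[0]
    # KKT matrix: [[Q, A^T], [A, 0]] ; unknowns are (x, lambda)
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # primal solution x

Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # strictly convex objective
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])               # constraint: x0 + x1 = 1
b = np.array([1.0])
x = solve_eq_qp(Q, c, A, b)              # optimum is (0, 1)
```

The structure-exploitation the abstract mentions corresponds to swapping the dense `np.linalg.solve` here for a sparse or application-specific factorization of the same KKT system.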
Discriminative language modeling with conditional random fields and the perceptron algorithm. In Proc. ACL, 2004.
Cited by 73 (7 self)
Abstract
This paper describes discriminative language modeling for a large-vocabulary speech recognition task. We contrast two parameter estimation methods: the perceptron algorithm, and a method based on conditional random fields (CRFs). The models are encoded as deterministic weighted finite state automata, and are applied by intersecting the automata with word lattices that are the output from a baseline recognizer. The perceptron algorithm has the benefit of automatically selecting a relatively small feature set in just a couple of passes over the training data. However, using the feature set output from the perceptron algorithm (initialized with their weights), CRF training provides an additional 0.5% reduction in word error rate, for a total 1.8% absolute reduction from the baseline of 39.2%.
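The perceptron update the abstract contrasts with CRF training can be sketched in a few lines. Here small candidate lists stand in for recognizer lattices, and the n-gram count features are an illustrative assumption:

```python
# Minimal sketch of the structured-perceptron update used for
# discriminative language modeling: pick the highest-scoring hypothesis,
# and on a mistake promote the reference's features and demote the
# mistaken hypothesis's features.
from collections import Counter

def features(words):
    # unigram + bigram count features over a hypothesis (illustrative)
    f = Counter(words)
    f.update(zip(words, words[1:]))
    return f

def perceptron_train(data, epochs=3):
    w = Counter()
    for _ in range(epochs):
        for candidates, reference in data:
            score = lambda h: sum(w[k] * v for k, v in features(h).items())
            best = max(candidates, key=score)
            if best != reference:
                for k, v in features(reference).items():
                    w[k] += v
                for k, v in features(best).items():
                    w[k] -= v
    return w

# One toy "lattice": two hypotheses, the second is the reference.
data = [([("a", "hat"), ("a", "cat")], ("a", "cat"))]
w = perceptron_train(data)
score = lambda h: sum(w[k] * v for k, v in features(h).items())
# after training, the reference outscores its competitor
```

Only features that ever fire in an update get nonzero weight, which is the implicit feature selection the abstract credits the perceptron with.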
A scalable modular convex solver for regularized risk minimization. In KDD, ACM, 2007.
Cited by 68 (13 self)
Abstract
A wide variety of machine learning problems can be described as minimizing a regularized risk functional, with different algorithms using different notions of risk and different regularizers. Examples include linear Support Vector Machines (SVMs), Logistic Regression, Conditional Random Fields (CRFs), and Lasso amongst others. This paper describes the theory and implementation of a highly scalable and modular convex solver which solves all these estimation problems. It can be parallelized on a cluster of workstations, allows for data locality, and can deal with regularizers such as ℓ1 and ℓ2 penalties. At present, our solver implements 20 different estimation problems, can be easily extended, scales to millions of observations, and is up to 10 times faster than specialized solvers for many applications. The open source code is freely available as part of the ELEFANT toolbox.
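The regularized-risk template can be made concrete by fixing one loss and one regularizer. The sketch below minimizes logistic loss plus an ℓ2 penalty by plain gradient descent on toy data; it illustrates the objective's shape only, not the solver the paper describes:

```python
import numpy as np

# Regularized risk: R(w) = (1/m) * sum_i loss(w; x_i, y_i) + (lam/2)*||w||^2
# Instantiated with logistic loss, labels y in {-1, +1}. Swapping the loss
# (hinge, squared, ...) or the penalty changes the estimator, not the template.

def train(X, y, lam=0.1, lr=0.5, steps=200):
    m, n = X.shape
    w = np.zeros(n)
    for _ in range(steps):
        margins = y * (X @ w)
        p = 1.0 / (1.0 + np.exp(margins))        # derivative factor of log-loss
        grad = -(X.T @ (y * p)) / m + lam * w    # risk gradient + penalty gradient
        w -= lr * grad
    return w

# Toy linearly separable data.
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = train(X, y)   # trained weights separate the toy data
```

Because only the loss term and its (sub)gradient differ between estimators, one solver loop can serve SVMs, logistic regression, and friends, which is the modularity the abstract claims.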
A Component Architecture for High-Performance Scientific Computing. Intl. J. High-Performance Computing Applications, 2004.
Cited by 58 (20 self)
Abstract
The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components
PENNON: a code for convex nonlinear and semidefinite programming. Optimization Methods and Software.
Cited by 57 (11 self)
Abstract
We introduce a computer program, PENNON, for the solution of problems of convex Nonlinear and Semidefinite Programming (NLP-SDP). The algorithm used in PENNON is a generalized version of the Augmented Lagrangian method, originally introduced by Ben-Tal and Zibulevsky for convex NLP problems. We present a generalization of this algorithm to convex NLP-SDP problems, as implemented in PENNON, and details of its implementation. The code can also solve second-order conic programming (SOCP) problems, as well as problems with a mixture of SDP, SOCP, and NLP constraints. Results of extensive numerical tests and comparison with other optimization codes are presented. The test examples show that PENNON is particularly suitable for large sparse problems.
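The classical Augmented Lagrangian iteration that PENNON generalizes can be sketched for a single equality constraint. This is a textbook first-order variant on a toy problem, not PENNON's generalized penalty/barrier scheme for inequality and matrix constraints:

```python
import numpy as np

# Classical augmented-Lagrangian loop for  min f(x)  s.t.  h(x) = 0:
# repeatedly minimize L(x) = f(x) + lam*h(x) + (mu/2)*h(x)^2 in x,
# then update the multiplier with the first-order rule lam += mu*h(x).

def aug_lagrangian(f_grad, h, h_grad, x, mu=10.0, lam=0.0,
                   outer=20, inner=100, lr=0.01):
    for _ in range(outer):
        for _ in range(inner):  # approximate inner minimization by GD
            g = f_grad(x) + (lam + mu * h(x)) * h_grad(x)
            x = x - lr * g
        lam = lam + mu * h(x)   # multiplier update
    return x

# Toy problem: min ||x - (2,2)||^2  s.t.  x0 + x1 = 2 ; optimum is (1,1).
f_grad = lambda x: 2.0 * (x - 2.0)
h      = lambda x: x[0] + x[1] - 2.0
h_grad = lambda x: np.array([1.0, 1.0])

x = aug_lagrangian(f_grad, h, h_grad, np.zeros(2))
```

PENNON's generalization keeps this outer multiplier-update structure but replaces the quadratic penalty term with more general penalty functions so that inequality and semidefinite constraints fit the same framework.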
Evaluation and Extension of Maximum Entropy Models with Inequality Constraints, 2003.
Cited by 28 (0 self)
Abstract
A maximum entropy (ME) model is usually estimated so that it conforms to equality constraints on feature expectations.
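The contrast the title draws can be written out explicitly. In standard ME estimation the model's feature expectations must match the empirical ones exactly, while the inequality-constrained variant only requires them to fall within a band; the bound symbols below are illustrative notation:

```latex
% Standard ME estimation: equality constraints on each feature f_i
\mathbb{E}_{p}[f_i] = \mathbb{E}_{\tilde{p}}[f_i] \quad \text{for all } i
% Inequality-constrained variant: expectations may deviate within bounds
-B_i \;\le\; \mathbb{E}_{p}[f_i] - \mathbb{E}_{\tilde{p}}[f_i] \;\le\; A_i,
\qquad A_i, B_i \ge 0
```

Relaxing the equalities to inequalities acts as a form of regularization: features whose constraints are not tight can receive zero weight.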
Parallel components for PDEs and optimization: Some issues and experiences. Parallel Computing, 2002.
Cited by 17 (6 self)
Abstract
High-performance simulations in computational science often involve the combined software contributions of multidisciplinary teams of scientists, engineers, mathematicians, and computer scientists. One goal of component-based software engineering in large-scale scientific simulations is to help manage such complexity by enabling better interoperability among codes developed by different groups. This paper discusses recent work on building component interfaces and implementations in parallel numerical toolkits for mesh manipulations, discretization, linear algebra, and optimization. We consider several motivating applications involving partial differential equations and unconstrained minimization to demonstrate this approach and evaluate performance.
A differentiation-enabled Fortran 95 compiler. ISSN 0098-3500 (print), 1557-7295 (electronic), 2005.
Parallel PDE-based simulations using the common component architecture. In Are Magnus Bruaset, Petter Bjørstad, and Aslak Tveito, editors, Numerical Solution of PDEs on Parallel Computers, volume 51 of Lecture Notes in Computational Science and Engineering (LNCSE), 2006.
Cited by 12 (4 self)
Abstract
Summary. The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component-based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general-purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges