Results 1–10 of 28
Object-oriented software for quadratic programming
ACM Transactions on Mathematical Software, 2001
Cited by 60 (2 self)
The object-oriented software package OOQP for solving convex quadratic programming (QP) problems is described. The primal-dual interior-point algorithms supplied by OOQP are implemented in a way that is largely independent of the problem structure. Users may exploit problem structure by supplying linear algebra, problem data, and variable classes that are customized to their particular applications. The OOQP distribution contains default implementations that solve several important QP problem types, including general sparse and dense QPs, bound-constrained QPs, and QPs arising from support vector machines and Huber regression. The implementations supplied with the OOQP distribution are based on such well-known linear algebra packages as MA27/57, LAPACK, and PETSc. OOQP demonstrates the usefulness of object-oriented design in optimization software development, and establishes standards that can be followed in the design of software packages for other classes of optimization problems. A number of the classes in OOQP may also be directly reusable in other codes.
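The primal-dual interior-point approach that OOQP implements can be illustrated on a deliberately tiny problem. The sketch below is not OOQP's API or class hierarchy; it is a minimal one-variable version, assuming the bound-constrained QP min 0.5·q·x² + c·x subject to x ≥ 0, whose perturbed KKT conditions are qx + c − z = 0 and xz = μ:

```python
# Minimal one-variable sketch of a primal-dual interior point method for
# a bound-constrained QP:  minimize 0.5*q*x**2 + c*x  subject to x >= 0.
# Illustrative only -- OOQP's actual classes and solvers differ.

def solve_qp_1d(q, c, mu=1.0, tol=1e-10):
    x, z = 1.0, 1.0                  # strictly positive primal/dual start
    while mu > tol:
        # Perturbed KKT conditions:  q*x + c - z = 0,   x*z = mu.
        r_dual = q * x + c - z
        r_comp = mu - x * z
        # One Newton step on the perturbed KKT system:
        #   q*dx -   dz = -r_dual
        #   z*dx + x*dz =  r_comp
        dx = (r_comp - x * r_dual) / (q * x + z)
        dz = q * dx + r_dual
        # Fraction-to-boundary rule keeps x and z strictly positive.
        alpha = 1.0
        for v, dv in ((x, dx), (z, dz)):
            if dv < 0:
                alpha = min(alpha, -0.995 * v / dv)
        x += alpha * dx
        z += alpha * dz
        mu *= 0.2                    # shrink the barrier parameter
    return x

print(solve_qp_1d(1.0, -2.0))  # bound inactive: converges to ~2.0
print(solve_qp_1d(1.0, 2.0))   # bound active:   converges to ~0.0
```

The fraction-to-boundary damping and the geometric reduction of μ are the standard ingredients of path-following methods; in OOQP the analogous Newton system is a structured linear system handed off to a user-selectable linear algebra layer.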
Reoptimization with the Primal-Dual Interior Point Method
2001
Cited by 25 (10 self)
Reoptimization techniques for an interior point method applied to solve a sequence of linear programming problems are discussed. Conditions are given for problem perturbations that can be absorbed in merely one Newton step. The analysis is performed for both short-step and long-step feasible path-following methods. A practical procedure is then derived for an infeasible path-following method. It is applied in the context of crash start for several large-scale structured linear programs. Numerical results with OOPS, the new object-oriented parallel solver, demonstrate the efficiency of the approach. For large structured linear programs, crash start leads to about a 40% reduction in the number of iterations, which translates into a 25% reduction in solution time. The crash procedure parallelizes well, and speed-ups between 3.1 and 3.8 are achieved on 4 processors.
Exploiting Structure in Parallel Implementation of Interior Point Methods for Optimization
School of Mathematics, University of Edinburgh, 2004
A structure-conveying modelling language for mathematical and stochastic programming
Mathematical Programming Computation, 2009
Cited by 5 (1 self)
We present a structure-conveying algebraic modelling language for mathematical programming. The proposed language extends AMPL with object-oriented features that allow the user to construct models from sub-models, and is implemented as a combination of pre- and post-processing phases for AMPL. Unlike traditional modelling languages, the new approach does not scramble the block structure of the problem, and thus enables this structure to be passed on to the solver. Interior point solvers that exploit block linear algebra and decomposition-based solvers can therefore directly take advantage of the problem’s structure. The language contains features to conveniently model stochastic programming problems, although it is designed for a much broader application spectrum.
Efficient robust optimization for robust control with constraints
Mathematical Programming, Series A:1–33, 2007
Cited by 5 (1 self)
This paper proposes an efficient computational technique for the optimal control of linear discrete-time systems subject to bounded disturbances, with mixed polytopic constraints on the states and inputs. The problem of computing an optimal state feedback control policy, given the current state, is non-convex. A recent breakthrough has been the application of robust optimization techniques to reparameterise this problem as a convex program. While the reparameterised problem is theoretically tractable, the number of variables is quadratic in the number of stages (horizon length) N, and it has no apparent exploitable structure, leading to a computational time of O(N^6) per iteration of an interior-point method. We focus on the case where the disturbance set is ∞-norm bounded or the linear map of a hypercube, and the cost function involves the minimization of a quadratic cost. Here we make use of state variables to regain a sparse problem structure that is related to the structure of the original problem; that is, the policy optimization problem may be decomposed into a set of coupled finite-horizon control problems. This decomposition can then be formulated as a highly structured quadratic program, solvable by primal-dual interior-point methods in which each iteration requires O(N^3) time. This cubic iteration time can be guaranteed using a Riccati-based block factorization technique, which is standard in discrete-time optimal control. Numerical results are presented, using a standard sparse primal-dual interior point solver, which illustrate the efficiency of this approach.
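The Riccati recursion the abstract refers to can be sketched in its simplest setting: a scalar, unconstrained LQR problem. This is an illustrative reduction, not the paper's robust formulation; it shows why the backward sweep is linear in the horizon length N (each stage costing O(n³) for an n-dimensional state), rather than O(N⁶):

```python
# Scalar-LQR sketch of the Riccati backward sweep: dynamics
# x_{k+1} = a*x_k + b*u_k, stage cost q*x^2 + r*u^2, terminal weight qf.
# Each backward step is O(1) here (O(n^3) for an n-dimensional state),
# so the total cost is linear in the horizon length N.

def riccati_gains(a, b, q, r, qf, N):
    P = qf                                   # terminal cost-to-go weight
    gains = []
    for _ in range(N):                       # sweep from stage N-1 down to 0
        K = (b * P * a) / (r + b * b * P)    # feedback gain, u_k = -K*x_k
        P = q + a * P * (a - b * K)          # Riccati update of cost-to-go
        gains.append(K)
    gains.reverse()                          # gains[k] now applies at stage k
    return gains

# With a = b = q = r = 1 the stationary gain is 1/phi = (sqrt(5)-1)/2.
print(riccati_gains(1.0, 1.0, 1.0, 1.0, 1.0, 50)[0])
```

In the paper's setting the same recursion appears as a block factorization of the banded KKT matrix inside each interior-point iteration, which is what certifies the O(N³) bound.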
A preconditioning technique for Schur complement systems arising in stochastic optimization
A decomposition-based warm-start method for stochastic programming
2009
Cited by 3 (2 self)
In this paper we propose a warm-start technique for interior point methods, applicable to multistage stochastic linear programming problems. The main idea is to generate an initial point by decomposing the problem at the second stage and using an approximate solution of the subproblems as a starting point for the complete instance. We analyse this scheme and derive theoretical conditions under which the warm-start iterate is successful. We describe the implementation within the OOPS solver and the results of the numerical tests we performed.
Hybrid MPI/OpenMP parallel linear support vector machine training
 JMLR
Cited by 3 (0 self)
Support vector machines are a powerful machine learning technology, but the training process involves a dense quadratic optimization problem and is computationally challenging. A parallel implementation of linear support vector machine training has been developed, using a combination of MPI and OpenMP. Using an interior point method for the optimization and a reformulation that avoids the dense Hessian matrix, the structure of the augmented system matrix is exploited to partition data and computations efficiently amongst parallel processors. The new implementation has been applied to solve problems from the PASCAL Challenge on Large-Scale Learning. We show that our approach is competitive, and is able to solve problems in the Challenge many times faster than other parallel approaches. We also demonstrate that the hybrid version performs more efficiently than the version using pure MPI.
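The reformulation that avoids the dense Hessian rests on the linear-SVM dual Hessian having diagonal-plus-low-rank form, D + V·Vᵀ, with rank equal to the (small) number of features. As a hedged rank-one illustration in plain Python (not the paper's augmented-system code), the Sherman-Morrison identity solves such a system without ever forming the dense matrix:

```python
# The dual linear-SVM Hessian has the form D + V*V^T (diagonal plus a
# rank-f term, f = number of features), so Newton systems can be solved
# without forming the dense n-by-n matrix. Rank-one case via the
# Sherman-Morrison identity, using plain Python lists:

def solve_diag_plus_rank1(d, v, b):
    """Solve (diag(d) + v v^T) x = b in O(n) flops."""
    Dinv_b = [bi / di for bi, di in zip(b, d)]
    Dinv_v = [vi / di for vi, di in zip(v, d)]
    v_Db = sum(vi * y for vi, y in zip(v, Dinv_b))
    v_Dv = sum(vi * y for vi, y in zip(v, Dinv_v))
    scale = v_Db / (1.0 + v_Dv)
    return [y - scale * w for y, w in zip(Dinv_b, Dinv_v)]

# diag([2,2]) + [1,1]*[1,1]^T = [[3,1],[1,3]];  solve [[3,1],[1,3]] x = [3,3]
print(solve_diag_plus_rank1([2.0, 2.0], [1.0, 1.0], [3.0, 3.0]))  # [0.75, 0.75]
```

The general rank-f case replaces the scalar correction by a small f-by-f solve, which is also what makes the data partitioning across MPI ranks effective: only f-dimensional quantities need to be communicated.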
Structuring modeling technology
European Journal of Operational Research, 2004
Cited by 3 (2 self)
This paper presents the methodological background and implementation of a structured modeling environment developed to meet the requirements of modeling activities undertaken to support intergovernmental negotiations aimed at improving European air quality. Although the motivation for the reported work came from the complex application presented in the paper, its scope covers a wide range of issues related to model-based decision-making support. The paper starts with a summary of the modeling context, composed of: the role of models in decision-making support; modeling paradigms; and state-of-the-art aspects of modeling complex problems. The modeling process is then characterized, and the requirement analysis for the implementation of structured modeling is specified. The main part of the paper presents the structured modeling technology that was developed to support the implementation of structured modeling principles for modeling complex problems.
Inexact Coordinate Descent: Complexity and Preconditioning
2013
Cited by 1 (1 self)
In this paper we consider the problem of minimizing a convex function using a randomized block coordinate descent method. One of the key steps at each iteration of the algorithm is determining the update to a block of variables. Existing algorithms assume that, in order to compute the update, a particular subproblem is solved exactly. In this work we relax this requirement and allow the subproblem to be solved inexactly, leading to an inexact block coordinate descent method. Our approach incorporates the best known results for exact updates as a special case. Moreover, these theoretical guarantees are complemented by practical considerations: the use of iterative techniques to determine the update, as well as the use of preconditioning for further acceleration.
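A hedged sketch of the idea on a separable quadratic (blocks of size one, not the paper's general setting): each coordinate update is computed by a few damped inner iterations rather than the closed-form exact minimizer, giving an inexact coordinate descent method that still converges:

```python
# Inexact randomized coordinate descent on the separable quadratic
# f(x) = sum_i (0.5*q_i*x_i**2 - b_i*x_i), minimized at x_i = b_i/q_i.
# The exact coordinate update is available in closed form; here each
# update is instead a few damped gradient steps on the 1-D subproblem,
# mimicking an inexact iterative inner solver.

import random

def inexact_cd(q, b, iters=2000, inner_steps=3, seed=0):
    rng = random.Random(seed)
    x = [0.0] * len(q)
    for _ in range(iters):
        i = rng.randrange(len(q))            # sample a coordinate (block)
        for _ in range(inner_steps):         # inexact inner solve:
            x[i] -= 0.5 * (q[i] * x[i] - b[i]) / q[i]   # halves the residual
    return x

print(inexact_cd([1.0, 2.0, 4.0], [1.0, 2.0, 4.0]))  # each entry near 1.0
```

Each inner step only halves the subproblem residual, so the update is never exact; the outer method nevertheless converges, which is the behaviour the paper's complexity analysis quantifies in general.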