Results 1 to 10 of 170
A Fast Pseudo-Boolean Constraint Solver
, 2003
"... Linear PseudoBoolean (LPB) constraints denote inequalities between arithmetic sums of weighted Boolean functions and provide a significant extension of the modeling power of purely propositional constraints. They can be used to compactly describe many discrete EDA problems with constraints on linea ..."
Abstract

Cited by 101 (1 self)
Linear Pseudo-Boolean (LPB) constraints denote inequalities between arithmetic sums of weighted Boolean functions and provide a significant extension of the modeling power of purely propositional constraints. They can be used to compactly describe many discrete EDA problems with constraints on linearly combined, parameterized weights, yet also offer efficient search strategies for proving or disproving whether a satisfying solution exists. Furthermore, corresponding decision procedures can easily be extended for minimizing or maximizing an LPB objective function, thus providing a core optimization method for many problems in logic and physical synthesis. In this paper we review how recent advances in satisfiability (SAT) search can be extended for pseudo-Boolean constraints and describe a new LPB solver that is based on generalized constraint propagation and conflict-based learning. We present a comparison with other state-of-the-art LPB solvers which demonstrates the overall efficiency of our method.
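The generalized constraint propagation mentioned in the abstract can be illustrated on a single LPB constraint: with positive weights, a variable is forced to 1 whenever excluding its weight makes even the best possible completion fall short of the right-hand side. A minimal sketch (the function name and interface are illustrative, not the paper's solver):

```python
# Generalized constraint propagation for one linear pseudo-Boolean (LPB)
# constraint  sum(w_i * x_i) >= rhs  with positive integer weights.
def lpb_propagate(weights, rhs, assignment):
    """assignment: dict var index -> bool for fixed variables.
    Returns the list of variables forced to True, or None on conflict."""
    # Best case: every unfixed variable is set to 1, fixed ones as assigned.
    best = sum(w for i, w in enumerate(weights) if assignment.get(i, True))
    if best < rhs:
        return None  # conflict: even the best completion falls short
    forced = []
    for i, w in enumerate(weights):
        if i not in assignment and best - w < rhs:
            forced.append(i)  # setting x_i = 0 would violate the constraint
    return forced

# 3*x0 + 2*x1 + 1*x2 >= 4 with x0 fixed to 0: best case is 3 < 4 -> conflict.
print(lpb_propagate([3, 2, 1], 4, {0: False}))  # None
# Nothing fixed: best case is 6, but dropping x0 leaves 3 < 4, so x0 is forced.
print(lpb_propagate([3, 2, 1], 4, {}))          # [0]
```

In a full solver this check runs per constraint inside the SAT-style propagation loop, and conflicts feed the learning mechanism.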
Selected topics in column generation
 Operations Research
, 2002
"... DantzigWolfe decomposition and column generation, devised for linear programs, is a success story in large scale integer programming. We outline and relate the approaches, and survey mainly recent contributions, not found in textbooks, yet. We emphasize on the growing understanding of the dual poin ..."
Abstract

Cited by 72 (5 self)
Dantzig-Wolfe decomposition and column generation, devised for linear programs, is a success story in large-scale integer programming. We outline and relate the approaches, and survey mainly recent contributions not yet found in textbooks. We emphasize the growing understanding of the dual point of view, which has brought considerable progress to column generation theory and practice. It stimulated careful initializations, sophisticated solution techniques for the restricted master problem and the subproblem, as well as better overall performance. Thus, the dual perspective is an ever-recurring concept in our "selected topics."
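The dual point of view shows up most concretely in the pricing step: the restricted master's dual values define the reduced cost of every column, and a column enters only if that reduced cost is negative. A hypothetical cutting-stock pricing routine, where pricing is a knapsack solved by dynamic programming (the names and the toy data are illustrative, not from the survey):

```python
def price_column(duals, lengths, roll):
    """Cutting-stock pricing: find a pattern (count per item) maximizing
    sum(duals[i] * pattern[i])  s.t.  sum(lengths[i] * pattern[i]) <= roll.
    The pattern enters the master iff its value exceeds 1, i.e. its
    reduced cost 1 - value is negative."""
    best = [0.0] * (roll + 1)      # best[c] = max dual value using capacity c
    take = [None] * (roll + 1)     # back-pointers: (item or None, prev capacity)
    for c in range(1, roll + 1):
        best[c], take[c] = best[c - 1], (None, c - 1)
        for i, L in enumerate(lengths):
            if L <= c and best[c - L] + duals[i] > best[c]:
                best[c], take[c] = best[c - L] + duals[i], (i, c - L)
    pattern = [0] * len(lengths)
    c = roll
    while c > 0:                   # recover the optimal pattern
        i, c = take[c]
        if i is not None:
            pattern[i] += 1
    return best[roll], pattern

# Duals 0.5 and 0.4 for pieces of length 3 and 2 cut from a roll of length 7:
value, pattern = price_column([0.5, 0.4], [3, 2], 7)
print(value, pattern)  # 1.3 [1, 2]  -> reduced cost -0.3, column enters
```

The master LP is then re-solved with the new column, new duals are obtained, and pricing repeats until no column has negative reduced cost.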
Review of nonlinear mixed-integer and disjunctive programming techniques
 Optimization and Engineering
, 2002
"... This paper has as a major objective to present a unified overview and derivation of mixedinteger nonlinear programming (MINLP) techniques, Branch and Bound, OuterApproximation, Generalized Benders and Extended Cutting Plane methods, as applied to nonlinear discrete optimization problems that are ex ..."
Abstract

Cited by 55 (15 self)
The major objective of this paper is to present a unified overview and derivation of mixed-integer nonlinear programming (MINLP) techniques, Branch and Bound, Outer-Approximation, Generalized Benders, and Extended Cutting Plane methods, as applied to nonlinear discrete optimization problems that are expressed in algebraic form. The solution of MINLP problems with convex functions is presented first, followed by a brief discussion on extensions for the nonconvex case. The solution of logic-based representations, known as generalized disjunctive programs, is also described. Theoretical properties are presented, and numerical comparisons are given on a small process network problem.
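Branch and Bound, the first technique listed, can be sketched on a one-dimensional convex toy problem where the continuous relaxation over an interval is solved by clamping the unconstrained minimizer into the interval. This only illustrates the bounding-and-branching pattern, not the paper's algorithms, and all names are illustrative:

```python
import math

def bb_min_integer(f, xstar, lo, hi):
    """Branch and bound for  min f(x), x integer in [lo, hi],  where f is
    convex with unconstrained minimizer xstar, so the relaxation over any
    sub-interval [a, b] is solved by clamping xstar into [a, b]."""
    best_x, best_val = None, math.inf
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if a > b:
            continue
        xr = min(max(xstar, a), b)        # relaxation optimum on this node
        if f(xr) >= best_val:
            continue                      # prune: bound no better than incumbent
        if xr == int(xr):
            best_x, best_val = int(xr), f(xr)   # integral: new incumbent
        else:                             # branch on the fractional value
            stack += [(a, math.floor(xr)), (math.ceil(xr), b)]
    return best_x, best_val

print(bb_min_integer(lambda x: (x - 2.4) ** 2, 2.4, 0, 5))  # (2, 0.16...)
```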
Convex Nondifferentiable Optimization: A Survey Focussed On The Analytic Center Cutting Plane Method.
, 1999
"... We present a survey of nondifferentiable optimization problems and methods with special focus on the analytic center cutting plane method. We propose a selfcontained convergence analysis, that uses the formalism of the theory of selfconcordant functions, but for the main results, we give direct pr ..."
Abstract

Cited by 51 (2 self)
We present a survey of nondifferentiable optimization problems and methods with special focus on the analytic center cutting plane method. We propose a self-contained convergence analysis that uses the formalism of the theory of self-concordant functions, but for the main results we give direct proofs based on the properties of the logarithmic function. We also provide an in-depth analysis of two extensions that are very relevant to practical problems: the case of multiple cuts and the case of deep cuts. We further examine extensions to problems with feasible sets partially described by an explicit barrier function, and to the case of nonlinear cuts. Finally, we review several implementation issues and discuss some applications.
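In one dimension the idea of querying the analytic center is easy to see: the analytic center of an interval [a, b] under the barrier -log(x - a) - log(b - x) is the midpoint, so each subgradient cut halves the localization interval. A toy sketch under that simplification (the general method works in R^n via Newton steps on the barrier; names here are illustrative):

```python
def accpm_1d(f, subgrad, a, b, iters=60):
    """1-D sketch of a cutting-plane method queried at the analytic center.
    For [a, b] the analytic center of -log(x-a) - log(b-x) is the midpoint,
    so every cut halves the localization interval."""
    best = min(f(a), f(b))
    for _ in range(iters):
        c = 0.5 * (a + b)                 # analytic center of [a, b]
        best = min(best, f(c))
        g = subgrad(c)
        if g > 0:
            b = c                         # cut: minimizer lies in [a, c]
        elif g < 0:
            a = c                         # cut: minimizer lies in [c, b]
        else:
            return c, f(c)                # zero subgradient: optimal
    return 0.5 * (a + b), best

# Minimize the nondifferentiable f(x) = |x - 1| over [-4, 6].
x, val = accpm_1d(lambda x: abs(x - 1.0),
                  lambda x: 1.0 if x > 1.0 else -1.0,
                  a=-4.0, b=6.0)
print(x, val)
```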
A Cutting Plane Method from Analytic Centers for Stochastic Programming
 Mathematical Programming
, 1994
"... The stochastic linear programming problem with recourse has a dual block angular structure. It can thus be handled by Benders decomposition or by Kelley's method of cutting planes; equivalently the dual problem has a primal block angular structure and can be handled by DantzigWolfe decomposition ..."
Abstract

Cited by 49 (18 self)
The stochastic linear programming problem with recourse has a dual block angular structure. It can thus be handled by Benders decomposition or by Kelley's method of cutting planes; equivalently, the dual problem has a primal block angular structure and can be handled by Dantzig-Wolfe decomposition; the two approaches are in fact identical by duality. Here we shall investigate the use of the method of cutting planes from analytic centers applied to similar formulations. The only significant difference from the aforementioned methods is that new cutting planes (or columns, by duality) will be generated not from the optimum of the linear programming relaxation, but from the analytic center of the localization set.
1 Introduction
The study of optimization problems in the presence of uncertainty still taxes the limits of methodology and software. One of the most approachable settings is that of two-stage planning under uncertainty, in which a first-stage decision has to be taken bef...
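For contrast with analytic-center queries, Kelley's method, mentioned above, queries the minimizer of the piecewise-linear model built from all cuts so far; in one dimension that minimizer lies at an interval endpoint or at the intersection of two cuts. A one-dimensional sketch (the interface and test function are illustrative, not the paper's stochastic formulation):

```python
def kelley_1d(f, subgrad, a, b, iters=10):
    """Kelley's cutting-plane method in one dimension: each iterate
    minimizes the piecewise-linear model  max_k [f(x_k) + g_k (t - x_k)]
    over [a, b].  Returns the best objective value found."""
    cuts = []                     # each cut stored as (slope g, intercept)
    x, best = a, float("inf")
    for _ in range(iters):
        fx = f(x)
        best = min(best, fx)
        g = subgrad(x)
        cuts.append((g, fx - g * x))
        model = lambda t: max(gi * t + ci for gi, ci in cuts)
        # Model minimum is at an endpoint or a pairwise cut intersection.
        cand = [a, b] + [(c2 - c1) / (g1 - g2)
                         for i, (g1, c1) in enumerate(cuts)
                         for g2, c2 in cuts[i + 1:] if g1 != g2]
        x = min((t for t in cand if a <= t <= b), key=model)
    return best

# Minimize f(x) = |x - 1| over [-4, 6]; the model pins down x = 1 quickly.
print(kelley_1d(lambda x: abs(x - 1.0),
                lambda x: 1.0 if x > 1.0 else -1.0, -4.0, 6.0))
```

Replacing the model minimizer with the analytic center of the localization set is exactly the change the paper investigates.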
Quadratic Optimization
, 1995
"... . Quadratic optimization comprises one of the most important areas of nonlinear programming. Numerous problems in real world applications, including problems in planning and scheduling, economies of scale, and engineering design, and control are naturally expressed as quadratic problems. Moreover, t ..."
Abstract

Cited by 46 (3 self)
Quadratic optimization comprises one of the most important areas of nonlinear programming. Numerous problems in real-world applications, including problems in planning and scheduling, economies of scale, engineering design, and control, are naturally expressed as quadratic problems. Moreover, the quadratic problem is known to be NP-hard, which makes this one of the most interesting and challenging classes of optimization problems. In this chapter, we review various properties of the quadratic problem and discuss different techniques for solving various classes of quadratic problems. Some of the more successful algorithms for solving the special cases of bound-constrained and large-scale quadratic problems are considered. Examples of various applications of quadratic programming are presented. A summary of the available computational results for the algorithms to solve the various classes of problems is presented. Key words: Quadratic optimization, bilinear programming, concave pro...
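For the bound-constrained special case mentioned above, projected gradient descent is one of the simplest workable methods: take a gradient step, then clip each coordinate back into its box. A small sketch with the step size taken from a crude Lipschitz bound (the instance is a toy, not from the chapter):

```python
def pg_box_qp(Q, b, lo, hi, steps=500):
    """Projected gradient for the bound-constrained QP
       min 0.5 * x'Qx - b'x   s.t.   lo <= x <= hi,
    with Q symmetric positive semidefinite."""
    n = len(b)
    # Crude Lipschitz bound on the gradient: max absolute row sum of Q.
    L = max(sum(abs(q) for q in row) for row in Q)
    x = [0.0] * n
    for _ in range(steps):
        grad = [sum(Q[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        # Gradient step followed by projection onto the box [lo, hi].
        x = [min(max(x[i] - grad[i] / L, lo[i]), hi[i]) for i in range(n)]
    return x

# Unconstrained minimizer of 0.5*(2x0^2 + 4x1^2) - 2x0 - 12x1 is (1, 3);
# the box [0, 2]^2 clips the second coordinate, giving (1, 2).
print(pg_box_qp([[2.0, 0.0], [0.0, 4.0]], [2.0, 12.0], [0.0, 0.0], [2.0, 2.0]))
```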
Logic-based Benders decomposition
 Mathematical Programming
, 2003
"... Benders decomposition uses a strategy of “learning from one’s mistakes.” The aim of this paper is to extend this strategy to a much larger class of problems. The key is to generalize the linear programming dual used in the classical method to an “inference dual. ” Solution of the inference dual take ..."
Abstract

Cited by 45 (10 self)
Benders decomposition uses a strategy of “learning from one’s mistakes.” The aim of this paper is to extend this strategy to a much larger class of problems. The key is to generalize the linear programming dual used in the classical method to an “inference dual.” Solution of the inference dual takes the form of a logical deduction that yields Benders cuts. The dual is therefore very different from other generalized duals that have been proposed. The approach is illustrated by working out the details for propositional satisfiability and 0-1 programming problems. Computational tests are carried out for the latter, but the most promising contribution of logic-based Benders may be to provide a framework for combining optimization and constraint programming methods.
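The “learning from one’s mistakes” loop can be sketched on a toy assignment problem: a master assigns jobs to machines by cost, a subproblem checks that each machine’s jobs fit a time horizon, and every failure yields a no-good Benders cut forbidding that job set on that machine. All names and the instance are illustrative, and the brute-force master stands in for an integer program:

```python
from itertools import product

def lbbd(cost, dur, horizon, machines, jobs):
    """Logic-based Benders sketch: master minimizes assignment cost subject
    to accumulated cuts; subproblem checks per-machine feasibility and,
    on failure, returns a no-good cut."""
    cuts = []                                   # (machine, frozenset of jobs)
    while True:
        best = None
        for assign in product(range(machines), repeat=jobs):  # toy master
            if any(all(assign[j] == m for j in S) for m, S in cuts):
                continue                        # assignment violates a cut
            c = sum(cost[j][assign[j]] for j in range(jobs))
            if best is None or c < best[1]:
                best = (assign, c)
        if best is None:
            return None, None                   # all assignments cut off
        assign, c = best
        ok = True
        for m in range(machines):               # subproblem per machine
            S = [j for j in range(jobs) if assign[j] == m]
            if sum(dur[j] for j in S) > horizon:
                cuts.append((m, frozenset(S)))  # no-good Benders cut
                ok = False
        if ok:
            return assign, c

# Machine 0 is cheapest, but only two 2-unit jobs fit a horizon of 4,
# so one job must move to machine 1 at cost 5: optimum 1 + 1 + 5 = 7.
print(lbbd([[1, 5], [1, 5], [1, 5]], [2, 2, 2], 4, 2, 3))
```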
Planning in the Presence of Cost Functions Controlled By An Adversary
 In Proceedings of the Twentieth International Conference on Machine Learning
, 2003
"... We investigate methods for planning in a Markov Decision Process where the cost function is chosen by an adversary after we fix our policy. As a running example, we consider a robot path planning problem where costs are influenced by sensors that an adversary places in the environment. We formulate ..."
Abstract

Cited by 44 (7 self)
We investigate methods for planning in a Markov Decision Process where the cost function is chosen by an adversary after we fix our policy. As a running example, we consider a robot path-planning problem where costs are influenced by sensors that an adversary places in the environment. We formulate the problem as a zero-sum matrix game where rows correspond to deterministic policies for the planning player and columns correspond to cost vectors the adversary can select.
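One simple way to approximately solve such a zero-sum matrix game is fictitious play, where both players repeatedly best-respond to the opponent's empirical mixture; this is shown only as an illustration of the game formulation, not the paper's (more efficient) methods:

```python
def fictitious_play(A, iters=20000):
    """Fictitious play on a zero-sum matrix game: the row (planning) player
    minimizes cost A[i][j]; the column (adversary) player maximizes it.
    The empirical mixtures approach a minimax equilibrium."""
    nr, nc = len(A), len(A[0])
    row_counts, col_counts = [0] * nr, [0] * nc
    i, j = 0, 0
    for _ in range(iters):
        row_counts[i] += 1
        col_counts[j] += 1
        # Row best response to the adversary's empirical column mixture.
        i = min(range(nr),
                key=lambda r: sum(A[r][c] * col_counts[c] for c in range(nc)))
        # Column best response to the row player's empirical mixture.
        j = max(range(nc),
                key=lambda c: sum(A[r][c] * row_counts[r] for r in range(nr)))
    row_mix = [n / iters for n in row_counts]
    # Worst-case expected cost of the empirical row mixture.
    value = max(sum(A[r][c] * row_mix[r] for r in range(nr)) for c in range(nc))
    return row_mix, value

# Matching pennies: equilibrium mixture (0.5, 0.5), game value 0.
row_mix, value = fictitious_play([[1, -1], [-1, 1]])
print(row_mix, value)
```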
Sequential and parallel algorithms for mixed packing and covering
 In 42nd Annual IEEE Symposium on Foundations of Computer Science
, 2001
"... We describe sequential and parallel algorithms that approximately solve linear programs with no negative coefficients (a.k.a. mixed packing and covering problems). For explicitly given problems, our fastest sequential algorithm returns a solution satisfying all constraints within a ¦ ¯ factor in Ç ..."
Abstract

Cited by 43 (2 self)
We describe sequential and parallel algorithms that approximately solve linear programs with no negative coefficients (a.k.a. mixed packing and covering problems). For explicitly given problems, our fastest sequential algorithm returns a solution satisfying all constraints within a 1 ± ε factor in O(md log(m)/ε²) time, where m is the number of constraints and d is the maximum number of constraints any variable appears in. Our parallel algorithm runs in time polylogarithmic in the input size times ε⁻⁴ and uses a total number of operations comparable to the sequential algorithm. The main contribution is that the algorithms solve mixed packing and covering problems (in contrast to pure packing or pure covering problems, which have only “≤” or only “≥” inequalities, but not both) and run in time independent of the so-called width of the problem.
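The flavor of such algorithms can be conveyed by a much-simplified multiplicative-weights sketch: packing rows get exponentially increasing weights, unmet covering rows exponentially decreasing ones, and mass is always added to the variable with the best weighted gain-to-load ratio. This toy is not the paper's algorithm and carries none of its guarantees:

```python
import math

def mixed_pack_cover(P, C, eps=0.05, step=0.001):
    """Toy multiplicative-weights sketch for a mixed problem:
    find x >= 0 with Cx >= 1 (covering) and Px <= 1 (packing),
    allowing a small eps-sized violation on the packing side."""
    n = len(P[0])
    x = [0.0] * n
    Px, Cx = [0.0] * len(P), [0.0] * len(C)
    while min(Cx) < 1.0:
        wp = [math.exp(v / eps) for v in Px]                     # packing weights
        wc = [math.exp(-v / eps) if v < 1 else 0.0 for v in Cx]  # unmet covering
        def ratio(j):
            load = sum(wp[i] * P[i][j] for i in range(len(P)))
            gain = sum(wc[i] * C[i][j] for i in range(len(C)))
            return gain / load if load > 0 else float("inf")
        j = max(range(n), key=ratio)      # best covering gain per packing load
        x[j] += step
        for i in range(len(P)):
            Px[i] += step * P[i][j]
        for i in range(len(C)):
            Cx[i] += step * C[i][j]
    return x, max(Px), min(Cx)

# Cover both of x0 >= 1 and x1 >= 1 while packing 0.5*x0 + 0.5*x1 <= 1:
# feasible exactly at x = (1, 1), which the sketch approaches.
x, pmax, cmin = mixed_pack_cover([[0.5, 0.5]], [[1.0, 0.0], [0.0, 1.0]])
print(x, pmax, cmin)
```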