Results 1–10 of 25
The Quadratic Assignment Problem: A Survey and Recent Developments
 In Proceedings of the DIMACS Workshop on Quadratic Assignment Problems, volume 16 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science
, 1994
"... . Quadratic Assignment Problems model many applications in diverse areas such as operations research, parallel and distributed computing, and combinatorial data analysis. In this paper we survey some of the most important techniques, applications, and methods regarding the quadratic assignment probl ..."
Abstract

Cited by 91 (16 self)
Quadratic Assignment Problems model many applications in diverse areas such as operations research, parallel and distributed computing, and combinatorial data analysis. In this paper we survey some of the most important techniques, applications, and methods regarding the quadratic assignment problem. We focus our attention on recent developments. 1. Introduction. Given a set N = {1, 2, ..., n} and n × n matrices F = (f_ij) and D = (d_kl), the quadratic assignment problem (QAP) can be stated as follows: min_{p ∈ Π_N} Σ_{i=1}^n Σ_{j=1}^n f_ij d_{p(i)p(j)} + Σ_{i=1}^n c_{ip(i)}, where Π_N is the set of all permutations of N. One of the major applications of the QAP is in location theory, where the matrix F = (f_ij) is the flow matrix, i.e. f_ij is the flow of materials from facility i to facility j, and D = (d_kl) is the distance matrix, i.e. d_kl represents the distance from location k to location l [62, 67, 137]. The cost of simultaneously assigning facility i to locat...
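The QAP objective above is straightforward to evaluate directly. The following minimal sketch (not from the survey) scores a permutation and finds the exact optimum by enumeration; the 3-facility instance F, D, C is hypothetical, chosen only for illustration:

```python
from itertools import permutations

def qap_cost(F, D, C, p):
    """QAP objective for permutation p:
    sum_i sum_j F[i][j] * D[p[i]][p[j]]  +  sum_i C[i][p[i]]."""
    n = len(p)
    quad = sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))
    lin = sum(C[i][p[i]] for i in range(n))
    return quad + lin

def qap_brute_force(F, D, C):
    """Exact minimum by enumerating all n! permutations (tiny n only)."""
    n = len(F)
    return min((qap_cost(F, D, C, p), p) for p in permutations(range(n)))

# Hypothetical 3-facility instance: F = flows, D = distances, C = fixed costs.
F = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]
D = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]
C = [[1, 2, 3], [3, 1, 2], [2, 3, 1]]
best_cost, best_perm = qap_brute_force(F, D, C)
```

Enumeration is of course exponential; it is shown here only to make the objective concrete, since the survey's point is precisely that realistic instances require the specialized techniques it reviews.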
Interior methods for nonlinear optimization
 SIAM Review
, 2002
"... Abstract. Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interiorpoint techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their ..."
Abstract

Cited by 76 (4 self)
Abstract. Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar's widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
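The classical barrier idea the article recounts can be made concrete on a one-variable problem. This sketch (an illustration, not code from the article) minimizes (x - 3)^2 subject to x <= 2 by applying Newton's method to the log-barrier subproblem min (x - 3)^2 - mu*log(2 - x) for a decreasing sequence of barrier parameters mu; the iterates trace the central path toward the constrained solution x* = 2 from the interior:

```python
def barrier_solve(mu, x=0.0, iters=50):
    """Newton's method on the barrier subproblem
        min (x - 3)**2 - mu * log(2 - x),
    whose minimizer is strictly interior to the feasible set x < 2."""
    for _ in range(iters):
        g = 2.0 * (x - 3.0) + mu / (2.0 - x)      # gradient of barrier function
        h = 2.0 + mu / (2.0 - x) ** 2             # Hessian (positive here)
        x_new = x - g / h
        if x_new >= 2.0:                          # safeguard: stay strictly interior
            x_new = (x + 2.0) / 2.0
        x = x_new
    return x

# Decreasing mu drives the barrier minimizer toward the solution x* = 2.
xs = [barrier_solve(mu) for mu in (1.0, 0.1, 0.01, 0.001)]
```

Modern interior methods differ in many respects (primal-dual formulations, careful mu updates), but the central-path behavior this toy example exhibits is the same phenomenon the article describes.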
Quadratic Optimization
, 1995
"... . Quadratic optimization comprises one of the most important areas of nonlinear programming. Numerous problems in real world applications, including problems in planning and scheduling, economies of scale, and engineering design, and control are naturally expressed as quadratic problems. Moreover, t ..."
Abstract

Cited by 46 (3 self)
Quadratic optimization comprises one of the most important areas of nonlinear programming. Numerous problems in real-world applications, including problems in planning and scheduling, economies of scale, and engineering design and control, are naturally expressed as quadratic problems. Moreover, the quadratic problem is known to be NP-hard, which makes this one of the most interesting and challenging classes of optimization problems. In this chapter, we review various properties of the quadratic problem, and discuss different techniques for solving various classes of quadratic problems. Some of the more successful algorithms for solving the special cases of bound-constrained and large-scale quadratic problems are considered. Examples of various applications of quadratic programming are presented. A summary of the available computational results for the algorithms to solve the various classes of problems is presented. Key words: Quadratic optimization, bilinear programming, concave pro...
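For the bound-constrained special case the chapter highlights, projected gradient is one of the simplest workable methods. A minimal sketch (the 2-variable instance is hypothetical; Q must be positive semidefinite and the step size below 2/lambda_max(Q) for convergence):

```python
def projected_gradient_qp(Q, c, lo, hi, x, step, iters=2000):
    """Projected gradient for the bound-constrained convex QP
        min 0.5 * x^T Q x + c^T x   subject to   lo <= x <= hi.
    Each iteration takes a gradient step, then clips back into the box."""
    n = len(x)
    for _ in range(iters):
        g = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        x = [min(hi[i], max(lo[i], x[i] - step * g[i])) for i in range(n)]
    return x

# Hypothetical instance: the unconstrained minimizer (2, 3) is clipped
# by the upper bound on the first variable, giving the solution (1, 3).
Q = [[2.0, 0.0], [0.0, 2.0]]
c = [-4.0, -6.0]
x = projected_gradient_qp(Q, c, lo=[0.0, 0.0], hi=[1.0, 5.0],
                          x=[0.0, 0.0], step=0.4)
```

Projection onto a box is a cheap per-coordinate clip, which is exactly why bound-constrained QPs are singled out as a tractable special case.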
Continuous Characterizations of the Maximum Clique Problem
 Math. Oper. Res
, 1996
"... Given a graph G whose adjacency matrix is A, the MotzkinStrauss formulation of the MaximumClique Problem is the quadratic program maxfx T Axjx T e = 1; x 0g. It is well known that the global optimum value of this QP is (1 \Gamma 1=!(G)), where !(G) is the clique number of G. Here, we characte ..."
Abstract

Cited by 37 (2 self)
Given a graph G whose adjacency matrix is A, the Motzkin–Straus formulation of the Maximum Clique Problem is the quadratic program max{x^T A x : x^T e = 1, x ≥ 0}. It is well known that the global optimum value of this QP is (1 - 1/ω(G)), where ω(G) is the clique number of G. Here, we characterize the following: 1) first-order optimality, 2) second-order optimality, 3) local optimality, 4) strict local optimality. These characterizations reveal interesting underlying discrete structures, and are polynomial-time verifiable. A parametrization of the Motzkin–Straus QP is then introduced and its properties are investigated. Finally, an extension of the Motzkin–Straus formulation is provided for the weighted clique number of a graph, and this is used to derive a maximin characterization of perfect graphs. 1 Introduction. 1.1 The Problem of Interest. Let A be the adjacency matrix of a graph G. Consider the Motzkin–Straus formulation (also called the Motzkin–Straus QP) of the Maximum Clique Pro...
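The Motzkin–Straus identity is easy to check numerically. In this sketch (an illustration, not code from the paper), a hypothetical 5-vertex graph has the triangle {0, 1, 2} as its maximum clique, so ω(G) = 3; placing uniform weight 1/3 on those three vertices attains x^T A x = 1 - 1/3:

```python
def ms_objective(A, x):
    """Motzkin-Straus objective x^T A x, with x on the standard simplex."""
    n = len(A)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# 5-vertex graph: {0,1,2} form a triangle (the maximum clique),
# vertex 3 is joined only to 0, vertex 4 is isolated.
A = [[0, 1, 1, 1, 0],
     [1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0],
     [1, 0, 0, 0, 0],
     [0, 0, 0, 0, 0]]

# Uniform weights on the maximum clique attain 1 - 1/omega(G) = 2/3.
val = ms_objective(A, [1/3, 1/3, 1/3, 0.0, 0.0])

# Spreading weight onto the non-clique vertex 3 only lowers the objective.
val2 = ms_objective(A, [0.25, 0.25, 0.25, 0.25, 0.0])
```

The local optima of this QP that do not attain the global value are exactly what the paper's optimality characterizations are designed to classify.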
User's Guide For QPOPT 1.0: A Fortran Package For Quadratic Programming
, 1995
"... QPOPT is a set of Fortran subroutines for minimizing a general quadratic function subject to linear constraints and simple upper and lower bounds. QPOPT may also be used for linear programming and for finding a feasible point for a set of linear equalities and inequalities. If the quadratic function ..."
Abstract

Cited by 13 (1 self)
QPOPT is a set of Fortran subroutines for minimizing a general quadratic function subject to linear constraints and simple upper and lower bounds. QPOPT may also be used for linear programming and for finding a feasible point for a set of linear equalities and inequalities. If the quadratic function is convex (i.e., the Hessian is positive definite or positive semidefinite), the solution obtained will be a global minimizer. If the quadratic is nonconvex (i.e., the Hessian is indefinite), the solution obtained will be a local minimizer or a dead point. A two-phase active-set method is used. The first phase minimizes the sum of infeasibilities. The second phase minimizes the quadratic function within the feasible region, using a reduced Hessian to obtain search directions. The method is most efficient when many constraints or bounds are active at the solution. QPOPT is not intended for large sparse problems, but there is no fixed limit on problem size. The source code is suitable for all scientific machines with a Fortran 77
Piecewise Sequential Quadratic Programming For Mathematical Programs With . . .
"... We describe some first and secondorder optimality conditions for mathematical programs with equilibrium constraints (MPEC). Mathematical programs with parametric nonlinear complementarity constraints are the focus. Of interest is the result that under a linear independence assumption that is stand ..."
Abstract

Cited by 12 (5 self)
We describe some first- and second-order optimality conditions for mathematical programs with equilibrium constraints (MPEC). Mathematical programs with parametric nonlinear complementarity constraints are the focus. Of interest is the result that under a linear independence assumption that is standard in nonlinear programming, the otherwise combinatorial problem of checking whether a point is stationary for an MPEC is reduced to checking stationarity of a single nonlinear program. We also present a piecewise sequential quadratic programming (PSQP) algorithm for solving MPEC. Local quadratic convergence is shown under the linear independence assumption and a second-order sufficient condition. Some computational results are given. KEY WORDS: MPEC, bilevel program, nonlinear complementarity problem, nonlinear program, first- and second-order optimality conditions, linear independence constraint qualification, sequential quadratic programming, quadratic convergence. 1 INTRODUC...
SecondOrder Sufficient Optimality Conditions for Local and Global Nonlinear Programming
, 1994
"... This paper presents a new approach to the sufficient conditions of nonlinear programming. Main result is a sufficient condition for the global optimality of a KuhnTucker point. This condition can be verified constructively, using a novel convexity test based on interval analysis, and is guaranteed ..."
Abstract

Cited by 9 (3 self)
This paper presents a new approach to the sufficient conditions of nonlinear programming. The main result is a sufficient condition for the global optimality of a Kuhn–Tucker point. This condition can be verified constructively, using a novel convexity test based on interval analysis, and is guaranteed to prove global optimality of strong local minimizers for sufficiently narrow bounds. Hence it is expected to be a useful tool within branch-and-bound algorithms for global optimization.
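An interval-analysis convexity test of the kind the abstract mentions can be sketched in one dimension: interval-evaluate the second derivative over a box, and if the resulting lower bound is nonnegative, convexity on that box is proved. This toy version (an illustration only, not the paper's actual test) certifies convexity of f(x) = x^4 - x^2, whose second derivative is f''(x) = 12x^2 - 2:

```python
class Interval:
    """Minimal interval arithmetic: just enough to bound f''(x)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

def certify_convex_on(lo, hi):
    """Proof attempt that f(x) = x**4 - x**2 is convex on [lo, hi]:
    interval-evaluate f''(x) = 12*x**2 - 2 and check its lower bound."""
    x = Interval(lo, hi)
    sq = x * x                                    # encloses x**2 (may overestimate)
    fpp = Interval(12 * sq.lo, 12 * sq.hi) - Interval(2.0, 2.0)
    return fpp.lo >= 0                            # True = convexity certified
```

Note the one-sidedness the abstract implies: a True answer is a proof, while a False answer (e.g. on [-1, 1], where f'' is genuinely negative at 0) only means convexity could not be certified on that box.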
Reformulation and Convex Relaxation Techniques for Global Optimization
 4OR
, 2004
"... Many engineering optimization problems can be formulated as nonconvex nonlinear programming problems (NLPs) involving a nonlinear objective function subject to nonlinear constraints. Such problems may exhibit more than one locally optimal point. However, one is often solely or primarily interested i ..."
Abstract

Cited by 9 (7 self)
Many engineering optimization problems can be formulated as nonconvex nonlinear programming problems (NLPs) involving a nonlinear objective function subject to nonlinear constraints. Such problems may exhibit more than one locally optimal point. However, one is often solely or primarily interested in determining the globally optimal point. This thesis is concerned with techniques for establishing such global optima using spatial Branch-and-Bound (sBB) algorithms.
Global Optimization of Nonconvex Nonlinear Programs Using Parallel Branch and Bound
, 1995
"... A branch and bound algorithm for computing globally optimal solutions to nonconvex nonlinear programs in continuous variables is presented. The algorithm is directly suitable for a wide class of problems arising in chemical engineering design. It can solve problems defined using algebraic functions ..."
Abstract

Cited by 8 (0 self)
A branch and bound algorithm for computing globally optimal solutions to nonconvex nonlinear programs in continuous variables is presented. The algorithm is directly suitable for a wide class of problems arising in chemical engineering design. It can solve problems defined using algebraic functions and twice differentiable transcendental functions, in which finite upper and lower bounds can be placed on each variable. The algorithm uses rectangular partitions of the variable domain and a new bounding program based on convex/concave envelopes and positive definite combinations of quadratic terms. The algorithm is deterministic and obtains convergence with final regions of finite size. The partitioning strategy uses a sensitivity analysis of the bounding program to predict the best variable to split and the split location. Two versions of the algorithm are considered, the first using a local NLP algorithm (MINOS) and the second using a sequence of lower bounding programs in the search fo...
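The branch-and-bound scheme described above, rectangular partitions plus a lower-bounding program, can be sketched in one dimension. This toy version (an illustration, not the paper's algorithm, which uses envelope-based bounds and sensitivity-driven splitting) minimizes the nonconvex f(x) = (x^2 - 1)^2 using exact interval bounds as the bounding program:

```python
import heapq

def isq(lo, hi):
    """Exact range of x**2 over the interval [lo, hi]."""
    if lo <= 0.0 <= hi:
        return 0.0, max(lo * lo, hi * hi)
    return min(lo * lo, hi * hi), max(lo * lo, hi * hi)

def f(x):
    """Nonconvex test function with global minima f(+-1) = 0."""
    return (x * x - 1.0) ** 2

def f_lower(lo, hi):
    """Interval lower bound on f over [lo, hi] (the 'bounding program')."""
    slo, shi = isq(lo, hi)                  # range of x**2
    return isq(slo - 1.0, shi - 1.0)[0]     # lower end of (x**2 - 1)**2

def branch_and_bound(lo, hi, tol=1e-6):
    """Best-first branch and bound with rectangular (interval) partitions."""
    best = min(f(lo), f(hi), f(0.5 * (lo + hi)))    # incumbent upper bound
    heap = [(f_lower(lo, hi), lo, hi)]
    while heap:
        bound, a, b = heapq.heappop(heap)
        if bound > best - tol:
            continue                                # box cannot improve
        m = 0.5 * (a + b)
        best = min(best, f(m))                      # update incumbent
        for c, d in ((a, m), (m, b)):               # split the box
            if f_lower(c, d) < best - tol:
                heapq.heappush(heap, (f_lower(c, d), c, d))
    return best

gmin = branch_and_bound(-2.0, 2.0)   # global minimum is 0, at x = +-1
```

Boxes whose lower bound cannot beat the incumbent by more than the tolerance are fathomed, so the search terminates with a value within tol of the global minimum; in the algorithm of the abstract, the analogous pruning is driven by convex/concave envelope relaxations rather than interval arithmetic.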
Iteration Algorithm for Computing Bounds in Quadratic Optimization Problems
 Complexity in Numerical Optimization
, 1993
"... We consider the problem of optimizing a quadratic function subject to integer constraints. This problem is NPhard in the general case. We present a new polynomial time algorithm for computing bounds on the solutions to such optimization problems. We transform the problem into a problem for minimizi ..."
Abstract

Cited by 6 (0 self)
We consider the problem of optimizing a quadratic function subject to integer constraints. This problem is NP-hard in the general case. We present a new polynomial-time algorithm for computing bounds on the solutions to such optimization problems. We transform the problem into a problem of minimizing the trace of a matrix subject to a positive definiteness condition. We then propose an interior-point method to solve this problem. We show that the algorithm takes no more than O(nL) iterations (where L is the number of bits required to represent the input). The algorithm does two matrix inversions in each iteration. Keywords: Bounds, complexity, quadratic optimization, interior-point methods. 1 Outline. The second section of the paper shall introduce the problem of computing upper bounds on a quadratic optimization problem. We shall also motivate an interior-point approach to solving the problem. The third section gives an interior-point method for solving the problem. The algorith...