Results 1 - 10 of 34
Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems
 SIAM Journal on Optimization
, 2004
"... A new class of algorithms for solving nonlinearly constrained mixed variable optimization problems is presented. This class combines and extends the AudetDennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPSfilter algorithms for gene ..."
Abstract

Cited by 36 (8 self)
 Add to MetaCart
A new class of algorithms for solving nonlinearly constrained mixed variable optimization problems is presented. This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPS-filter algorithms for general nonlinear constraints. In generalizing existing algorithms, new theoretical convergence results are presented that reduce seamlessly to existing results for more specific classes of problems. While no local continuity or smoothness assumptions are required to apply the algorithm, a hierarchy of theoretical convergence results based on the Clarke calculus is given, in which local smoothness dictates what can be proved about certain limit points generated by the algorithm. To demonstrate its usefulness, the algorithm is applied to the design of a load-bearing thermal insulation system. We believe this is the first algorithm with provable convergence results to directly target this class of problems.
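The GPS framework this abstract builds on polls a fixed set of mesh directions around the incumbent and refines the mesh when no poll point improves. As a hedged illustration only (the paper's mixed-variable and filter machinery is not reproduced here), a minimal continuous-variable pattern search might look like:

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Minimal coordinate-direction pattern search (continuous variables only).

    Polls the 2n points x +/- step * e_i; an improving poll point is accepted
    immediately, and an unsuccessful poll halves the step (mesh refinement)
    until the step drops below tol.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:          # simple decrease accepted
                    x, fx = trial, ft
                    improved = True
        if not improved:
            step *= 0.5              # refine the mesh on an unsuccessful poll
    return x, fx

# Example: a smooth bowl; the method needs no derivatives.
x_opt, f_opt = pattern_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                              x0=[0.0, 0.0])
```

On this quadratic the iterates land exactly on the minimizer (1, -2), since it sits on the initial integer mesh.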
An exact reformulation algorithm for large nonconvex NLPs involving bilinear terms
 Journal of Global Optimization
, 2005
"... Many nonconvex nonlinear programming (NLP) problems of practical interest involve bilinear terms and linear constraints, as well as, potentially, other convex and nonconvex terms and constraints. In such cases, it may be possible to augment the formulation with additional linear constraints (a subse ..."
Abstract

Cited by 21 (9 self)
 Add to MetaCart
Many nonconvex nonlinear programming (NLP) problems of practical interest involve bilinear terms and linear constraints, as well as, potentially, other convex and nonconvex terms and constraints. In such cases, it may be possible to augment the formulation with additional linear constraints (a subset of Reformulation-Linearization Technique constraints) which do not affect the feasible region of the original NLP but tighten that of its convex relaxation to the extent that some bilinear terms may be dropped from the problem formulation. We present an efficient graph-theoretical algorithm for effecting such exact reformulations of large, sparse NLPs. The global solution of the reformulated problem using spatial Branch-and-Bound algorithms is usually significantly faster than that of the original NLP. We illustrate this point by applying our algorithm to a set of pooling and blending global optimization problems.
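The paper's graph-theoretical reformulation algorithm is not reproduced here, but the linear constraints it manipulates come from linearizing bilinear terms. A minimal sketch of the standard McCormick envelopes for w = x·y on a box, the basic building block that RLT-style cuts tighten:

```python
def mccormick_bounds(xL, xU, yL, yU, x, y):
    """Lower/upper envelopes of w = x*y from the four McCormick inequalities.

    These linear constraints over-approximate the bilinear term on the box
    [xL, xU] x [yL, yU]; RLT-style reformulations add such linear cuts to
    tighten the convex relaxation.
    """
    lower = max(xL * y + yL * x - xL * yL,
                xU * y + yU * x - xU * yU)
    upper = min(xU * y + yL * x - xU * yL,
                xL * y + yU * x - xL * yU)
    return lower, upper

# The envelopes sandwich the true product everywhere on the box.
lo, hi = mccormick_bounds(0.0, 2.0, -1.0, 3.0, x=1.0, y=1.0)
assert lo <= 1.0 * 1.0 <= hi
```

At (x, y) = (1, 1) on this box the envelopes give the interval [-1, 3] around the true product 1; the envelopes are tight at the box corners.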
Parameterized LMIs in Control Theory
 SIAM J. Control Optim
, 1998
"... A wide variety of problems in control system theory fall within the class of parameterized Linear Matrix Inequalities (LMIs), that is, LMIs whose coefficients are functions of a parameter conned to a compact set. Such problems, though convex, involve an innite set of LMI constraints, hence are inher ..."
Abstract

Cited by 21 (9 self)
 Add to MetaCart
A wide variety of problems in control system theory fall within the class of parameterized Linear Matrix Inequalities (LMIs), that is, LMIs whose coefficients are functions of a parameter confined to a compact set. Such problems, though convex, involve an infinite set of LMI constraints, hence are inherently difficult to solve numerically. This paper investigates relaxations of parameterized LMI problems into standard LMI problems using techniques relying on directional convexity concepts. An in-depth discussion of the impacts of the proposed techniques in quadratic programming, Lyapunov-based stability and performance analysis, µ analysis and Linear Parameter-Varying control is provided. Illustrative examples are given to demonstrate the usefulness and practicality of the approach.
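To see why a parameterized LMI is "an infinite set of LMI constraints", consider requiring A(θ) ≺ 0 for every θ in a compact set. The naive finite surrogate samples θ on a grid, which is precisely what the paper's relaxations improve upon by giving finite LMI conditions valid for all θ. A toy sketch, with a hypothetical A(θ) chosen for illustration:

```python
import numpy as np

def plmi_holds_on_grid(A, thetas, tol=1e-9):
    """Sample-based check that A(theta) is negative definite on a grid.

    A parameterized LMI imposes one matrix inequality per theta, i.e.
    infinitely many constraints; gridding is the naive finite surrogate
    (it can miss violations between grid points).
    """
    for th in thetas:
        eigmax = np.linalg.eigvalsh(A(th)).max()
        if eigmax >= -tol:
            return False
    return True

# Hypothetical example: A(theta) = -I + theta * [[0, 1], [1, 0]] has
# eigenvalues -1 +/- theta, so it is negative definite iff |theta| < 1.
A = lambda th: -np.eye(2) + th * np.array([[0.0, 1.0], [1.0, 0.0]])
print(plmi_holds_on_grid(A, np.linspace(-0.9, 0.9, 19)))
```

The grid check succeeds here, but passes no certificate for the unsampled θ, which is why finite relaxations with guarantees are the interesting object.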
Relaxations of Parameterized LMIs with Control Applications
 International J. of Nonlinear Robust Controls
, 1998
"... . A wide variety of problems in control system theory fall within the class of parameterized Linear Matrix Inequalities (LMIs), that is, LMIs whose coefficients are functions of a parameter confined to a compact set. However, in contrast to LMIs, parameterized LMI (PLMIs) feasibility problems involv ..."
Abstract

Cited by 9 (6 self)
 Add to MetaCart
A wide variety of problems in control system theory fall within the class of parameterized Linear Matrix Inequalities (LMIs), that is, LMIs whose coefficients are functions of a parameter confined to a compact set. However, in contrast to LMIs, parameterized LMI (PLMI) feasibility problems involve infinitely many LMIs and hence are very hard to solve. In this paper, we propose several effective relaxation techniques to replace PLMIs by a finite set of LMIs. The resulting relaxed feasibility problems thus become convex and hence can be solved by very efficient interior point methods. Applications of these techniques to different problems such as robustness analysis, or Linear Parameter-Varying (LPV) control are then thoroughly discussed and illustrated by examples. 1 Introduction Linear matrix inequalities (LMIs) have emerged as a very powerful tool in the analysis and synthesis for robust control problems (see e.g. [6, 8, 12] and references therein). From a computational point of view,...
Reformulation and Convex Relaxation Techniques for Global Optimization
 4OR
, 2004
"... Many engineering optimization problems can be formulated as nonconvex nonlinear programming problems (NLPs) involving a nonlinear objective function subject to nonlinear constraints. Such problems may exhibit more than one locally optimal point. However, one is often solely or primarily interested i ..."
Abstract

Cited by 9 (7 self)
 Add to MetaCart
Many engineering optimization problems can be formulated as nonconvex nonlinear programming problems (NLPs) involving a nonlinear objective function subject to nonlinear constraints. Such problems may exhibit more than one locally optimal point. However, one is often solely or primarily interested in determining the globally optimal point. This thesis is concerned with techniques for establishing such global optima using spatial Branch-and-Bound (sBB) algorithms.
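The sBB idea the abstract refers to can be sketched in one dimension: keep a queue of boxes with valid lower bounds on the objective, improve an incumbent by evaluating feasible points, and prune boxes whose bound cannot beat the incumbent. A hedged toy version (not the thesis's algorithm; the lower bound here is a crude interval bound rather than a convex relaxation):

```python
import heapq

def spatial_bb(f, lb_fn, lo, hi, tol=1e-4, max_nodes=10000):
    """Minimal 1-D spatial branch-and-bound.

    lb_fn(lo, hi) must return a valid lower bound of f on [lo, hi];
    incumbent upper bounds come from evaluating f at box midpoints.
    Boxes whose lower bound exceeds the incumbent are pruned.
    """
    best_x = 0.5 * (lo + hi)
    best_f = f(best_x)
    heap = [(lb_fn(lo, hi), lo, hi)]
    nodes = 0
    while heap and nodes < max_nodes:
        bound, a, b = heapq.heappop(heap)
        if bound > best_f - tol:          # smallest bound left: converged
            break
        mid = 0.5 * (a + b)
        fm = f(mid)
        if fm < best_f:
            best_x, best_f = mid, fm
        for lo2, hi2 in ((a, mid), (mid, b)):
            lb2 = lb_fn(lo2, hi2)
            if lb2 < best_f - tol:        # keep only boxes that can improve
                heapq.heappush(heap, (lb2, lo2, hi2))
        nodes += 1
    return best_x, best_f

# Nonconvex example: f(x) = x**4 - 3*x**2 has two global minima at
# x = +/- sqrt(1.5) with value -2.25, plus a local maximum at 0.
def f(x):
    return x ** 4 - 3 * x ** 2

def lb_fn(a, b):
    # Interval bounds on x**2 give a crude but valid bound on x**4 - 3*x**2.
    s_hi = max(a * a, b * b)
    s_lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return s_lo ** 2 - 3 * s_hi

x_opt, f_opt = spatial_bb(f, lb_fn, -2.0, 2.0)
```

Real sBB codes differ mainly in the quality of `lb_fn` (convex relaxations instead of interval arithmetic) and in branching rules, but the prune/branch/incumbent loop is the same.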
D.C. Optimization Approach to Robust Control: Feasibility Problems
 J. of Control
, 1997
"... . The feasibility problem for constant scaling in output feedback control is considered. This is an inherently difficult problem [20, 21] since the set of feasible solutions is nonconvex and may be disconnected. Nevertheless, we show that this problem can be reduced to the global maximization of a c ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
The feasibility problem for constant scaling in output feedback control is considered. This is an inherently difficult problem [20, 21] since the set of feasible solutions is nonconvex and may be disconnected. Nevertheless, we show that this problem can be reduced to the global maximization of a concave function over a convex set, or alternatively, to the global minimization of a convex program with an additional reverse convex constraint. Thus this feasibility problem belongs to the realm of d.c. optimization [14, 15, 32, 33], a new field which has recently emerged as an active and promising research direction in nonconvex global optimization. By exploiting the specific d.c. structure of the problem, several algorithms are proposed which at every iteration require solving only either convex or linear subproblems. Analogous algorithms with new characterizations are proposed for the Bilinear Matrix Inequality (BMI) feasibility problem. 1 Introduction Consider the system given by Fig. 1, ...
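A structural fact underlying the reduction described above: a concave function attains its minimum over a polytope at an extreme point. The brute-force illustration below enumerates the vertices of a box; d.c. algorithms of the kind the abstract describes avoid such enumeration through cuts and convex subproblems:

```python
from itertools import product

def min_concave_over_box(f, bounds):
    """Minimize a concave function over a box by enumerating its vertices.

    A concave function attains its minimum over a polytope at an extreme
    point, so for an n-dimensional box it suffices to check the 2^n corners.
    This is exponential in n, which is why practical d.c. methods replace
    enumeration with cutting planes and convex/linear subproblems.
    """
    best_v, best_f = None, float("inf")
    for corner in product(*bounds):
        val = f(corner)
        if val < best_f:
            best_v, best_f = corner, val
    return best_v, best_f

# Concave example: f(x, y) = -(x**2 + y**2); its minimum over the box is
# attained at the corner farthest from the origin.
v, val = min_concave_over_box(lambda p: -(p[0] ** 2 + p[1] ** 2),
                              [(-1.0, 2.0), (-3.0, 1.0)])
```

Here the minimizing corner is (2, -3), the point of the box farthest from the origin, with value -13.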
Complexity Analysis of Successive Convex Relaxation Methods for Nonconvex Sets
 Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology
, 1999
"... . This paper discusses computational complexity of conceptual successive convex relaxation methods proposed by Kojima and Tun¸cel for approximating a convex relaxation of a compact subset F = fx 2 C 0 : p(x) 0 (8p(\Delta) 2 P F )g of the ndimensional Euclidean space R n . Here C 0 denotes a none ..."
Abstract

Cited by 5 (3 self)
 Add to MetaCart
This paper discusses computational complexity of conceptual successive convex relaxation methods proposed by Kojima and Tunçel for approximating a convex relaxation of a compact subset F = {x ∈ C₀ : p(x) ≤ 0 (∀p(·) ∈ P_F)} of the n-dimensional Euclidean space Rⁿ. Here C₀ denotes a nonempty compact convex subset of Rⁿ, and P_F a set of finitely or infinitely many quadratic functions. We evaluate the number of iterations which the successive convex relaxation methods require to attain a convex relaxation of F with a given accuracy ε, in terms of ε, the diameter of C₀, the diameter of F, and some other quantities characterizing the Lipschitz continuity, the nonlinearity and the nonconvexity of the set P_F of quadratic functions. Keywords: Complexity, Nonconvex Quadratic Program, Semidefinite Programming, Global Optimization, SDP Relaxation, Convex Relaxation, Lift-and-Project Procedure. 1 Introduction. In their paper [2], Kojima and Tunçel proposed a class of...
Adjustable robust optimization models for nonlinear multiperiod optimization
, 2004
"... We study multiperiod nonlinear optimization problems whose parameters are uncertain. We assume that uncertain parameters are revealed in stages and model them using the adjustable robust optimization approach. For problems with polytopic uncertainty, we show that quasiconvexity of the optimal valu ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
We study multi-period nonlinear optimization problems whose parameters are uncertain. We assume that uncertain parameters are revealed in stages and model them using the adjustable robust optimization approach. For problems with polytopic uncertainty, we show that quasiconvexity of the optimal value function of certain subproblems is sufficient for the reducibility of the resulting robust optimization problem to a single-level deterministic problem. We relate this sufficient condition to the quasi cone-convexity of the feasible set mapping for adjustable variables and present several examples and applications satisfying these conditions.
Robust and reducedorder filtering: new LMIbased characterizations and methods
 IEEE Transactions on Signal Processing
, 2001
"... Several challenging problems of robust filtering are addressed in this paper. First of all, we exploit a new LMI (Linear Matrix Inequality) characterization of minimum variance or of H2 performance, and demonstrate that it allows the use of parameterdependent Lyapunov functions while preserving trac ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
Several challenging problems of robust filtering are addressed in this paper. First of all, we exploit a new LMI (Linear Matrix Inequality) characterization of minimum variance or H2 performance, and demonstrate that it allows the use of parameter-dependent Lyapunov functions while preserving tractability of the problem. The resulting conditions are less conservative than earlier techniques, which are restricted to a fixed (i.e., parameter-independent) Lyapunov function. The rest of the paper focuses on reduced-order filtering problems. New LMI-based nonconvex optimization formulations are introduced for the existence of reduced-order filters. Then, several efficient local and global optimization algorithms are proposed. Nontrivial and less conservative relaxation techniques are discussed as well. The viability and efficiency of the proposed tools are confirmed through computational experiments and also through comparisons with earlier methods.
Concavity Cuts for Disjoint Bilinear Programming
 MATHEMATICAL PROGRAMMING
, 2001
"... We pursue the study of concavity cuts for the disjoint bilinear programming problem. This optimization problem has two equivalent symmetric linear maxmin reformulations, leading to two sets of concavity cuts. We first examine the depth of these cuts by considering the assumptions on the boundedness ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
We pursue the study of concavity cuts for the disjoint bilinear programming problem. This optimization problem has two equivalent symmetric linear max-min reformulations, leading to two sets of concavity cuts. We first examine the depth of these cuts by considering the assumptions on the boundedness of the feasible regions of both the max-min and bilinear formulations. We next propose a branch-and-bound algorithm which makes use of concavity cuts. We also present a procedure that eliminates degenerate solutions. Extensive computational experience is reported. Sparse problems with up to 500 variables in each disjoint set and 100 constraints, and dense problems with up to 60 variables in each set and 60 constraints, are solved in reasonable computing times.