## The Theory And Applications Of Discrete Constrained Optimization Using Lagrange Multipliers (2000)

Citations: 4 (0 self)

### BibTeX

@TECHREPORT{Wu00thetheory,

author = {Zhe Wu},

title = {The Theory And Applications Of Discrete Constrained Optimization Using Lagrange Multipliers},

institution = {},

year = {2000}

}

### Abstract

In this thesis, we present a new theory of discrete constrained optimization using Lagrange multipliers and an associated first-order search procedure (DLM) for solving general constrained optimization problems in discrete, continuous, and mixed-integer spaces. The problems are general in the sense that the functions involved are not assumed to be differentiable or convex. Our proposed theory and methods target discrete problems and extend to continuous and mixed-integer problems by coding continuous variables in a floating-point representation (discretization). We have characterized the errors incurred by such discretization and have proved that upper bounds on these errors exist. Hence, continuous and mixed-integer constrained problems, as well as discrete ones, can be handled by DLM in a unified way with bounded errors.

### Citations

11494 |
Computers and Intractability, a Guide to the Theory of NP-Completeness
- Garey, Johnson
- 1979
Citation Context: ...ear constrained optimization problems is to find feasible solutions that satisfy all the constraints. This is not an easy task because nonlinear constrained optimization problems are normally NP-hard [63]. In practice, the difficulties in solving a nonlinear constrained optimization problem arise from the challenge of searching a huge variable space in order to locate feasible points with desirable solu...

8306 |
Genetic Algorithm
- Goldberg
- 1989
Citation Context: ...an optimal solution with high probabilities. Further, because SA allows up-hill moves, it is generally not as efficient as local search methods that only accept down-hill moves. Genetic algorithm (GA) [80, 59, 133, 161, 142, 146, 93], a typical global optimization algorithm with reachability, roots itself in nature's rule of "fitness to survive." As summarized in [135, 133, 132], a genetic algorithm has five basic components: a) ...

4050 |
Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images
- Geman, Geman
- 1984
Citation Context: ...lues of trial points. On the other hand, when T approaches zero at the end of the annealing process, only trial points with better function values will be accepted. A necessary and sufficient condition [66] for SA to converge to an unconstrained global optimum requires T to be decreased at a rate inversely proportional to a logarithmic function of time, given a sufficiently large initial temperature T_0:...
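
The convergence condition quoted in this context, temperature decreased inversely proportionally to a logarithmic function of time, can be sketched in code. The function below is an illustrative toy, not the thesis's implementation; the name `anneal_logarithmic`, the quartic test function, and the step size are all assumptions.

```python
import math
import random

def anneal_logarithmic(f, x0, neighbor, t0=10.0, steps=2000, seed=0):
    """Minimize f by simulated annealing under the logarithmic cooling
    schedule T_k = T0 / log(k + 2) described in the context above.
    f, neighbor, and t0 are illustrative placeholders."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 / math.log(k + 2)              # T decreases ~ 1/log(time)
        y = neighbor(x, rng)
        fy = f(y)
        # Metropolis rule: always accept downhill moves; accept uphill
        # moves with probability exp(-(fy - fx) / T).
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Usage: a one-dimensional quartic with two global minima at x = -1, 1.
f = lambda x: (x * x - 1.0) ** 2
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x, v = anneal_logarithmic(f, 3.0, step)
```

With a sufficiently large `t0`, early iterations accept most uphill moves; the logarithmic decay then slows acceptance of worse points, which is the behavior the cited condition requires.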

3931 | Optimization by Simulated Annealing
- Kirkpatrick, Gelatt, et al.
- 1983
Citation Context: ...chance to find high-quality solutions. Last, global-optimization methods can find global optima for unconstrained problems or constrained global optima for constrained problems when the methods stop [118, 132, 19, 147, 30, 232, 213, 214]. Deterministic versus stochastic search methods: an iterative search procedure is deterministic [106, 235] if each probe is generated deterministically by the procedure. Otherwise, it is called a p...

2215 |
Genetic Algorithms + Data Structures = Evolution Programs
- Michalewicz
- 1996
Citation Context: ...chance to find high-quality solutions. Last, global-optimization methods can find global optima for unconstrained problems or constrained global optima for constrained problems when the methods stop [118, 132, 19, 147, 30, 232, 213, 214]. Deterministic versus stochastic search methods: an iterative search procedure is deterministic [106, 235] if each probe is generated deterministically by the procedure. Otherwise, it is called a p...

1168 |
Linear and Nonlinear Programming
- Luenberger
- 1984
Citation Context: ...of a finite number of its neighbors. This feature allows the search of a descent direction to be done by enumeration rather than by differentiation. In contrast, point x is a constrained local minimum [128] in continuous space if and only if, for any feasible x' ∈ N_cn(x), f(x') ≥ f(x) holds true. Unlike the definition of N_dn(x) in discrete space, N_cn(x) is well defined and unique. Thus, in cont...
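
The contrast drawn in this context, descent by enumerating a finite discrete neighborhood rather than by differentiation, suggests a direct check for a discrete constrained local minimum. The sketch below uses illustrative stand-ins for f, the feasibility test, and N_dn(x); none of these names come from the thesis.

```python
def is_discrete_clm(x, f, feasible, neighbors):
    """Check whether x is a constrained local minimum in discrete
    space: x is feasible and no feasible neighbor has a strictly
    smaller objective value. All arguments are illustrative stand-ins
    for the thesis's f, feasibility test, and N_dn(x)."""
    if not feasible(x):
        return False
    return all(f(y) >= f(x) for y in neighbors(x) if feasible(y))

# Usage: minimize f(x) = x^2 over the integers subject to x >= 2,
# with N_dn(x) = {x - 1, x + 1}.
f = lambda x: x * x
feasible = lambda x: x >= 2
neighbors = lambda x: [x - 1, x + 1]
print(is_discrete_clm(2, f, feasible, neighbors))  # x = 2 is a CLM
print(is_discrete_clm(3, f, feasible, neighbors))  # x = 3 is not
```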

1123 |
A computing procedure for quantification theory
- Davis, Putnam
- 1960
Citation Context: ...cision formulations, defined in (6.1), entail the search of solutions that can satisfy all the clauses. Existing methods in this class are generally complete methods. Examples include resolution [14, 157, 68, 45], backtracking [154], and consistency testing [84, 89]. Due to the exhaustive nature of these search methods, they are expensive to use and normally have difficulty addressing large-size problems. There...

977 |
A machine-oriented logic based on the resolution principle
- Robinson
- 1965
Citation Context: ...cision formulations, defined in (6.1), entail the search of solutions that can satisfy all the clauses. Existing methods in this class are generally complete methods. Examples include resolution [14, 157, 68, 45], backtracking [154], and consistency testing [84, 89]. Due to the exhaustive nature of these search methods, they are expensive to use and normally have difficulty addressing large-size problems. There...

946 |
Accuracy and Stability of Numerical Algorithms
- Higham
- 1996
Citation Context: ....1 Characteristics of Floating-Point Representations. A typical floating-point number y on a digital computer consists of the mantissa d and the exponent e, each represented by a finite number of bits [95]: y = β^e × .d_1 d_2 ... d_t, (3.3) where β is the base (also called the radix), t is the precision, and e is the exponent in a range determined by [e_min, e_max]. Each digit d_i in (3.3) sat...
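
For base β = 2, the mantissa/exponent form in (3.3) corresponds to what Python's `math.frexp` returns. The snippet below is a minimal illustration of the representation and of the bounded rounding error that discretization incurs; the specific numbers are illustrative, not from the thesis.

```python
import math

def decompose(y):
    """Split y into mantissa m and exponent e with y = m * 2**e and
    0.5 <= |m| < 1, mirroring the beta^e . d1 d2 ... dt form of (3.3)
    for base (radix) beta = 2."""
    m, e = math.frexp(y)
    return m, e

m, e = decompose(6.5)            # 6.5 = 0.8125 * 2**3
assert m * 2 ** e == 6.5

# Rounding a real number to t-bit precision has a relative error
# bounded by the unit roundoff; for IEEE-754 doubles, t = 53, so the
# bound is 2**-53. 0.1 is stored as the nearest double to 1/10:
x = 0.1
assert abs(x - 0.1) == 0.0       # x IS that nearest double
assert abs(x * 10 - 1.0) <= 1.0 * 2 ** -52
```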

707 | A new method for solving hard satisfiability problems
- Selman, Levesque, et al.
- 1992
Citation Context: ...lems. The various strategies proposed include iterative perturbation of trajectories and randomized search for overcoming local minima [188, 189, 184, 86, 85, 89, 87, 88]. Selman et al. proposed GSAT [175], a randomized local search that takes the best possible moves whenever possible, and that randomly picks one variable and flips its assignment when there are several moves (or flips of variables) wit...

667 | Tabu Search
- Glover, Laguna
- 1997
Citation Context: ...s. Global search methods based on penalty formulations normally have certain techniques to escape from the attraction of local minima or valleys in a search space. Typical methods include tabu search [75, 77, 15], multistart [168, 165, 94, 191], heuristic repair methods [41], break-out strategies [139], guided local search (GLS) [201], and random walk [173]. Among all the global search methods, multi-start is...

560 |
Constrained optimization and Lagrange multiplier methods
- Bertsekas
- 1982
Citation Context: ...p(x) is the penalty term. A widely used penalty term is: p(x) = Σ_{i=1}^{n} w_i |h_i(x)|, (2.2) where the w_i are weight coefficients to be determined. A simple solution is to use a static-penalty formulation [22, 128] that sets the w_i to static large positive values. This way, a local minimum of eval(x) is a constrained local minimum (CLM_dn), and a global minimum of eval(x) is a constrained global minimum (CGM_d...
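
The static-penalty formulation of (2.2) is short enough to sketch directly. The objective, constraint, and weight below are illustrative choices, not taken from the thesis.

```python
def eval_penalty(x, f, constraints, weights):
    """Static-penalty objective eval(x) = f(x) + sum_i w_i * |h_i(x)|
    per (2.2). With large fixed w_i, unconstrained minima of eval
    coincide with constrained minima. All names are illustrative."""
    return f(x) + sum(w * abs(h(x)) for h, w in zip(constraints, weights))

# Usage: minimize f(x) = x over the integers 0..6 subject to the
# equality constraint h(x) = x - 3 = 0.
f = lambda x: x
h = lambda x: x - 3
w = [100.0]                                  # static large positive weight
values = {x: eval_penalty(x, f, [h], w) for x in range(0, 7)}
assert min(values, key=values.get) == 3      # the penalty drives x to 3
```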

548 | Pushing the envelope: planning, propositional logic, and stochastic search
- Kautz, Selman
- 1996
Citation Context: ...[table rows of benchmark results for the bw-large problems omitted; "NR" stands for "not reported"] *: Results from [173] for similar but not the same problems in the DIMACS archive. **: Results from [171]. ***: Results from [116]. Figure 6.7: Distribution of number of flips for applying DLM-Distance-Penalty-SAT to solve benchmark proble...

505 | Primal-Dual Interior-Point Methods
- Wright
- 1997
Citation Context: ...ed continuous minimization problems using only local information. Typical methods include gradient descent, conjugate gradient methods, Newton descent [128, 144, 158], and barrier or interior methods [58, 144, 224, 225]. Since they may get stuck in local minima and their solution quality is heavily dependent on their starting points, they are often used as components of other global-search or global-optimization met...

494 |
Global Optimization Using Interval Analysis
- Hansen
- 1992
Citation Context: ...ained NLPs. A general continuous constrained NLP problem is defined in (1.1) in which x is a vector of continuous variables. Active research in the past three decades has produced a variety of methods [196, 107, 58, 91, 144, 133] to solve the general constrained continuous minimization problem defined in (1.1). Based on different problem formulations, existing methods can be classified into three categories: penalty formulatio...

403 | GRASP: A Search Algorithm for Propositional Satisfiability
- Silva
Citation Context: ...-Penalty has not found any solution to par32-?-c, as denoted by '-' in the table. ("NR" in the table stands for "not reported.") Performance comparisons of Grasp [130], DLM-Trap-Avoidance-SAT and DLM-BASIC-SAT on some typical DIMACS benchmarks. The timing results of Grasp were collected on a SUN SPARC 5/85 computer. '-' stands for 'not solved.'...

378 | Noise Strategies for Improving Local Search
- Selman, Kautz, et al.
- 1994
Citation Context: ...same effect. Flat moves, or sideways moves, are allowed to better explore plateaus in the variable space. GSAT can quickly solve randomly generated 3-SAT problems with up to 2000 variables. WalkSAT [174, 173, 172] adds random walks ('noise') to GSAT by picking a variable in some unsatisfied clauses with probability p and flipping its assignment, and by performing greedy local search (GSAT) with probability 1 - p....
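
The GSAT/WalkSAT behavior described in this context, greedy flips with probability 1 - p and random-walk flips from unsatisfied clauses with probability p, can be sketched as follows. This is a minimal reading of the description above, not the authors' implementation; the clause encoding and parameter defaults are assumptions.

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=100000, seed=0):
    """WalkSAT-style search per the description above: pick a random
    unsatisfied clause; with probability p flip a random variable in
    it (noise), otherwise flip the variable in it whose flip leaves
    the fewest clauses unsatisfied (greedy, GSAT-like).
    Clauses are lists of nonzero ints; -v means variable v negated."""
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars + 1)]

    def sat(clause):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    def num_unsat():
        return sum(1 for c in clauses if not sat(c))

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign                  # satisfying assignment found
        clause = rng.choice(unsat)
        if rng.random() < p:               # random-walk ('noise') move
            var = abs(rng.choice(clause))
        else:                              # greedy (GSAT-like) move
            def cost(v):                   # unsat clauses after flipping v
                assign[v] = not assign[v]
                n = num_unsat()
                assign[v] = not assign[v]
                return n
            var = min({abs(lit) for lit in clause}, key=cost)
        assign[var] = not assign[var]
    return None                            # no solution within max_flips

# Usage: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
model = walksat(clauses, n_vars=3)
```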

366 |
Multiple Criteria Optimization, Theory, Computation and Application
- Steuer
- 1986
Citation Context: ...ural optimization, neural-network design, VLSI design, database design and processing, nuclear power-plant design and operation, mechanical engineering, physical sciences, and chemical-process control [144, 193, 17, 47, 56]. A general goal in solving nonlinear constrained optimization problems is to find feasible solutions that satisfy all the constraints. This is not an easy task because nonlinear constrained optimizat...

361 |
An overview of evolutionary algorithms for parameter optimization
- Bäck, Schwefel
- 1993
Citation Context: ...or solving discrete constrained NLPs without any transformation on their objective and constraint functions can be classified into two approaches. One major approach is based on rejecting, discarding [8, 9, 150] or repairing [114] methods in order to avoid infeasible points. This approach, however, has difficulty in handling nonlinear constraints whose feasible regions may be very hard to locate, leading to mo...

288 |
Global Optimization: Deterministic Approaches
- Horst, Tuy
- 1996
Citation Context: ...ained NLPs. A general continuous constrained NLP problem is defined in (1.1) in which x is a vector of continuous variables. Active research in the past three decades has produced a variety of methods [196, 107, 58, 91, 144, 133] to solve the general constrained continuous minimization problem defined in (1.1). Based on different problem formulations, existing methods can be classified into three categories: penalty formulatio...

286 | Local search strategies for satisfiability testing. Cliques, coloring, and satisfiability: Second DIMACS implementation challenge 26
- SELMAN, KAUTZ, et al.
- 1993
Citation Context: ...cepts discussed in this section: constant f(x), N_2(x), and periodic reduction of all Lagrange multipliers by a common factor. In addition, it uses heuristics based on tabu lists [75] and flat moves [173]. We explain each step of this algorithm when we present our proposed trap-avoidance strategy in the next section. Table 6.1 lists the average performance of our current implementation of DLM-BASIC-SAT...

252 |
Partitioning Procedures for Solving Mixed-Variables Programming Problems
- Benders
- 1962
Citation Context: ...ay that after fixing a subset of the variables, the resulting subproblem is convex and can be solved easily. There are three classes of these algorithms. a) Generalized Benders Decomposition (GBD) [56, 71, 21] is used to solve a subclass of constrained MINLPs under some convexity assumptions. For example, it requires the continuous subspace to be a nonempty and convex set and the objective and constraint f...

245 | Evolutionary algorithms for constrained parameter optimization problems
- Michalewicz, Schoenauer
- 1996
Citation Context: ...ns to the process that sacrifice the global optimality of solutions have been developed [117, 129]. Various constraint-handling techniques have been developed based on dynamic-penalty formulations in [99, 113, 133, 148, 134, 76, 8, 170, 145, 169]. Besides requiring domain-specific knowledge, most of these heuristics have difficulties in finding feasible regions or in maintaining feasibility for nonlinear constraints and get stuck easily in local...

244 | The reactive tabu search
- BATTITI, TECCHIOLLI
- 1994
Citation Context: ...small local region in their search space. Global-search methods, on the other hand, have techniques for escaping from the attraction of local minima or constrained local minima in their search space [77, 15, 168, 165, 94, 191, 173, 201], thereby having a better chance to find high-quality solutions. Last, global-optimization methods can find global optima for unconstrained problems or constrained global optima for constrained proble...

239 | A survey of evolution strategies
- Back, Hoffmeister, et al.
- 1991
Citation Context: ...ns to the process that sacrifice the global optimality of solutions have been developed [117, 129]. Various constraint-handling techniques have been developed based on dynamic-penalty formulations in [99, 113, 133, 148, 134, 76, 8, 170, 145, 169]. Besides requiring domain-specific knowledge, most of these heuristics have difficulties in finding feasible regions or in maintaining feasibility for nonlinear constraints and get stuck easily in local...

226 | Global Optimization
- Törn, Žilinskas
- 1989
Citation Context: ...in, in the difficulty in modeling accurately a search space determined by nonlinear objective and constraint functions and in the high cost of applying them to problems with more than twenty variables [196]. In general, global search methods based on penalty formulations can at best achieve constrained local minima (CLM_dn), given large penalties on constraints. However, as mentioned before, selecting...

224 | Domain-independent extensions to GSAT: Solving large structured satisfiability problems
- Selman, Kautz
- 1993
Citation Context: ...ing eval(x) meets the goal of locating problem solutions. The weakness of GLS, however, is that features are very problem-specific and a poorly chosen feature may be detrimental to GLS. A random walk [173, 172] performs greedy local descents most of the time and perturbs, occasionally, one or several variables in order to bring the search out of local traps. A probability, p, is used to govern the percentag...

208 |
An introduction to simulated evolutionary optimization
- Fogel
- 1994
Citation Context: ...an optimal solution with high probabilities. Further, because SA allows up-hill moves, it is generally not as efficient as local search methods that only accept down-hill moves. Genetic algorithm (GA) [80, 59, 133, 161, 142, 146, 93], a typical global optimization algorithm with reachability, roots itself in nature's rule of "fitness to survive." As summarized in [135, 133, 132], a genetic algorithm has five basic components: a) ...

208 |
The breakout method for escaping from local minima
- Morris
- 1993
Citation Context: ...from the attraction of local minima or valleys in a search space. Typical methods include tabu search [75, 77, 15], multistart [168, 165, 94, 191], heuristic repair methods [41], break-out strategies [139], guided local search (GLS) [201], and random walk [173]. Among all the global search methods, multi-start is the most straightforward approach to get out of local minima. The method works as follows....

202 |
Modeling genetic algorithms with markov chains
- Nix, Vose
- 1992
Citation Context: ...an optimal solution with high probabilities. Further, because SA allows up-hill moves, it is generally not as efficient as local search methods that only accept down-hill moves. Genetic algorithm (GA) [80, 59, 133, 161, 142, 146, 93], a typical global optimization algorithm with reachability, roots itself in nature's rule of "fitness to survive." As summarized in [135, 133, 132], a genetic algorithm has five basic components: a) ...

197 |
Introduction to Global Optimization
- Horst, Pardalos, et al.
- 2000
Citation Context: ...d global optima for constrained problems when the methods stop [118, 132, 19, 147, 30, 232, 213, 214]. Deterministic versus stochastic search methods: an iterative search procedure is deterministic [106, 235] if each probe is generated deterministically by the procedure. Otherwise, it is called a probabilistic or stochastic procedure. Complete versus incomplete search methods: complete methods have mech...

192 | Convergence analysis of canonical genetic algorithms
- Rudolph
- 1994

188 |
Adapting operator Probabilities in genetic algorithms
- Davis
- 1989
Citation Context: ...N(0, σ) stands for a random Gaussian distribution with a zero mean and a standard deviation of σ. A possible crossover operator produces two offspring, x'(n+1) and y'(n+1), by a linear combination [44] of two parents x(n) and y(n) using: x'(n + 1) = αx(n) + (1 - α)y(n) (2.15) and y'(n + 1) = (1 - α)x(n) + αy(n), (2.16) where α is in the range (0, 1). Obviously, using a Gaussian mutation operator an...
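
The linear (arithmetic) crossover of (2.15)-(2.16) is short enough to sketch directly; the list encoding of parent vectors and the helper name are illustrative choices.

```python
import random

def linear_crossover(x, y, alpha=None, rng=random):
    """Arithmetic crossover per (2.15)-(2.16): the offspring are the
    linear combinations alpha*x + (1 - alpha)*y and its mirror, with
    alpha drawn from (0, 1) when not supplied. Names are illustrative."""
    if alpha is None:
        alpha = rng.random()
    child1 = [alpha * a + (1 - alpha) * b for a, b in zip(x, y)]
    child2 = [(1 - alpha) * a + alpha * b for a, b in zip(x, y)]
    return child1, child2

c1, c2 = linear_crossover([0.0, 2.0], [4.0, 6.0], alpha=0.25)
# Componentwise, c1 + c2 always equals x + y, whatever alpha is.
```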

168 | Improvements to propositional satisfiability search algorithms
- Freeman
- 1995
Citation Context: ...ly, we compare our algorithms to Grasp [130], one of the best complete algorithms. Since Grasp performs the best on most DIMACS benchmarks [130] when compared to other complete methods, such as POSIT [62], CSAT [49], H2R [152] and DPL, a recent implementation of the Davis-Putnam procedure [14], we compare our results with respect to Grasp only. Since Grasp is a complete method that can prove unsatisf...

154 |
An outer-approximation algorithm for a class of mixed-integer nonlinear programs
- Duran, Grossmann
- 1986
Citation Context: ...it can be shown that the sequence of upper bounds is non-increasing, that the sequence of lower bounds is non-decreasing, and that the sequences converge in finite time. b) Outer Approximation (OA) [51, 50] is similar to GBD except that it formulates the master problem using primal information and outer linearization. With similar restrictions as GBD, it requires the continuous subspace to be a nonempty...

143 |
Some guidelines for genetic algorithms with penalty functions
- Richardson, Palmer, et al.
Citation Context: ...rch to achieve the goal of finding feasible solutions. Global search methods introduce techniques to overcome local minima. Typical global search methods include rejecting methods, discarding methods [156, 150], repair methods [114, 143] and preserving feasibility [134, 76]. Rejecting and discarding methods have been discussed in Section 2.1.3. Typical repair methods have some techniques to transform or rep...

142 |
Generalized Benders Decomposition
- Geoffrion
- 1972
Citation Context: ...ay that after fixing a subset of the variables, the resulting subproblem is convex and can be solved easily. There are three classes of these algorithms. a) Generalized Benders Decomposition (GBD) [56, 71, 21] is used to solve a subclass of constrained MINLPs under some convexity assumptions. For example, it requires the continuous subspace to be a nonempty and convex set and the objective and constraint f...

142 |
Lagrangian Relaxation for Integer programming
- Geoffrion
- 1974
Citation Context: ...ar, and inaccurate bounds may lead to incorrect pruning and infeasible solutions when the algorithm terminates. 2.1.4 Lagrangian Relaxation. There is a class of algorithms called Lagrangian relaxation [72, 74, 64, 180, 16] proposed in the literature that should not be confused with our proposed discrete constrained optimization method using Lagrange multipliers. Lagrangian relaxation reformulates a linear integer minim...

138 | Enhancing an algorithm for set covering problems
- Beasley, Jörnsten
- 1992
Citation Context: ...chance to find high-quality solutions. Last, global-optimization methods can find global optima for unconstrained problems or constrained global optima for constrained problems when the methods stop [118, 132, 19, 147, 30, 232, 213, 214]. Deterministic versus stochastic search methods: an iterative search procedure is deterministic [106, 235] if each probe is generated deterministically by the procedure. Otherwise, it is called a p...

135 | MIMIC: Finding Optima by Estimating Probability Densities
- Bonet, Isbell, et al.
- 1997
Citation Context: ...in and is rederived after each generation. Note that the probabilistic model used in PBIL does not explore any inter-parameter dependency. Mutual information maximization for input clustering (MIMIC) [25] analyzes the global structure of a variable space, uses knowledge of this structure to guide a randomized search in the variable space, and refines the estimation of the structure using new informati...

131 |
CUTE: constrained and unconstrained testing environment
- Toint
- 1995
Citation Context: ...l on a comprehensive set of constrained NLP benchmarks: G1 through G10 developed in the GA community [135, 121], all 29 of Floudas and Pardalos' benchmarks in [57], and all nonlinear problems from CUTE [26], a constrained and unconstrained testing environment. These problems have objective functions of various types (linear, quadratic, cubic, polynomial, and nonlinear) and linear/nonlinear constraints o...

128 |
A Collection of Test Problems for Constrained Global Optimization Algorithms
- Floudas, Pardalos
- 1990
Citation Context: ...Effects of static and dynamic weights on convergence time and solution quality from 20 randomly generated starting points for the discretized version of Problem 2.6 in [57]. (Weight w is the initial weight in the dynamic case.) Performance comparison of DLM-General and CSA in solving discrete constrained NLPs derived from continuous c...

125 |
Handbook of Global Optimization
- Horst, Pardalos
- 1995
Citation Context: ...straints whose feasible regions may be very hard to locate, leading to mostly infeasible points generated that are rejected. The other approach is based on enumeration or randomized search techniques [105]. Enumerative algorithms [125, 221] belong to the class of complete methods that utilize branch-and-bound techniques to find lower bounds of linearized constraints. In these algorithms, branching va...

122 |
Minimization by random search techniques
- Solis, Wets
- 1981
Citation Context: ...ive) random search algorithm [232] works as follows: generate a trial point p_k, then advance the search by setting x_{k+1} = p_k if p_k is accepted, and x_{k+1} = x_k if p_k is rejected. (2.9) It was proved in [186] that under some generic conditions, a sequential random search method can asymptotically converge with probability one to a global minimum. Note that the way of generating and accepting trial points ...
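
The accept/reject rule (2.9) amounts to a pure random search that keeps the incumbent unless a trial point improves on it. A minimal sketch, assuming uniform trial-point generation over a box, which is one illustrative choice among many:

```python
import random

def sequential_random_search(f, lo, hi, iters=5000, seed=0):
    """Sequential random search matching (2.9): draw a trial point
    p_k; accept it as x_{k+1} only if it improves on x_k, otherwise
    keep x_k. Uniform sampling over [lo, hi] is an illustrative
    trial-point generator, not prescribed by the cited rule."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    for _ in range(iters):
        p = rng.uniform(lo, hi)       # trial point p_k
        if f(p) < f(x):               # acceptance test of (2.9)
            x = p                     # x_{k+1} = p_k
    return x                          # else x_{k+1} = x_k

# Usage: minimize (t - 2)^2 over [-10, 10].
x = sequential_random_search(lambda t: (t - 2.0) ** 2, -10.0, 10.0)
```

The asymptotic convergence-with-probability-one result cited from [186] depends on the generator continuing to cover the whole space, which uniform sampling satisfies.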

121 | Sequential quadratic programming
- Boggs, Tolle
- 1995
Citation Context: ...solving constrained minimization problems. These [22, 128] include the first-order method, Newton's method, modified Newton's methods, quasi-Newton methods, and sequential quadratic programming (SQP) [24, 48, 108]. A major advantage of these methods is that solving the first-order conditions matches exactly the goal of locating a CLM_cn. Therefore, these algorithms are usually efficient for solving continuous c...

117 |
Exact reconstruction techniques for tree-structured subband coders
- Smith, Barnwell
- 1986
Citation Context: ...h respect to a reference design. The specific measures constrained may be application- and filter-dependent [198]. Constraint-based methods have been applied to design QMF banks in both the frequency [112, 36, 43, 120, 183, 187] and time domains [140, 185]. In the frequency domain, the most often considered objectives are E_r (reconstruction error) and δ_s (stopband ripple). As stopband ripples cannot be formulated in close...

111 | A Davis-Putnam based enumeration algorithm for linear pseudo-boolean optimization
- Barth
- 1995
Citation Context: ...cision formulations, defined in (6.1), entail the search of solutions that can satisfy all the clauses. Existing methods in this class are generally complete methods. Examples include resolution [14, 157, 68, 45], backtracking [154], and consistency testing [84, 89]. Due to the exhaustive nature of these search methods, they are expensive to use and normally have difficulty addressing large-size problems. There...

109 |
Real-Coded Genetic Algorithms and Interval Schemata, in: Foundations of Genetic Algorithms 2
- Eshelman, Schaffer
- 1993
Citation Context: ...re a solution is represented by a string of binary bits (00011010010, for example). However, for many real-world problems, it is difficult and inefficient to use a binary representation. It has been found [54] that real-number encoding performs better than binary or Gray encoding for function and constraint optimization. The reason is that the topological structure of the coding space for a real-number enc...

108 |
On the use of nonstationary penalty functions to solve non-linear constrained optimization problems with GAs
- Joines, Houck
- 1994

105 |
Efficient local search for very large-scale satisfiability problems. Sigart Bulletin 3(1):8–12
- Gu
- 1992
Citation Context: ...methods may be trapped by local minima in the objective space, various global-search strategies have been proposed. Next, we discuss briefly some existing methods using unconstrained formulations. Gu [87, 190, 86, 85] proposed a number of local search and parallel local search methods for solving SAT problems. The various strategies proposed include iterative perturbation of trajectories and randomized search for overcomi...