Results 1–10 of 14
Simulated Annealing with Asymptotic Convergence for Nonlinear Constrained Global Optimization
Principles and Practice of Constraint Programming, 1999
Abstract

Cited by 34 (18 self)
In this paper, we present constrained simulated annealing (CSA), a global minimization algorithm that converges to constrained global minima with probability one, for solving nonlinear discrete nonconvex constrained minimization problems. The algorithm is based on the necessary and sufficient condition for constrained local minima in the theory of discrete Lagrange multipliers we developed earlier. The condition states that the set of discrete saddle points is the same as the set of constrained local minima when all constraint functions are nonnegative.
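The saddle-point condition above suggests a search over the joint space of variables and Lagrange multipliers. Below is a minimal, hypothetical Python sketch of such a CSA-style search; the Lagrangian form L(x, λ) = f(x) + Σ λ_i·|g_i(x)| (nonnegative constraint functions, as the abstract requires), the logarithmic cooling schedule, and the 50/50 split between variable and multiplier moves are illustrative assumptions, not the paper's exact algorithm.

```python
import math
import random

def csa_sketch(f, g, neighbors, x0, n_lambda, steps=20000, T0=10.0):
    """Hedged sketch of constrained simulated annealing (CSA).

    Minimizes f(x) subject to g(x) = 0 componentwise, using the
    Lagrangian L(x, lam) = f(x) + sum(lam_i * |g_i(x)|), whose
    constraint terms |g_i(x)| are nonnegative.  Probabilistic
    descent in x and ascent in lam; names and schedule are
    illustrative, not the paper's exact ones.
    """
    x, lam = x0, [0.0] * n_lambda

    def L(x, lam):
        return f(x) + sum(li * abs(gi) for li, gi in zip(lam, g(x)))

    for t in range(steps):
        T = T0 / math.log(t + 2)           # logarithmic cooling (for asymptotic convergence)
        if random.random() < 0.5:          # trial move in variable space: descend on L
            y = random.choice(neighbors(x))
            dL = L(y, lam) - L(x, lam)
            if dL <= 0 or random.random() < math.exp(-dL / T):
                x = y
        else:                              # trial move in multiplier space: ascend on L
            i = random.randrange(n_lambda)
            mu = lam[:]
            mu[i] = max(mu[i] + random.choice([-1.0, 1.0]), 0.0)
            dL = L(x, mu) - L(x, lam)
            if dL >= 0 or random.random() < math.exp(dL / T):
                lam = mu
    return x
```

For example, minimizing f(x) = x² over the integers subject to x − 3 = 0 should settle at x = 3: the ascending multiplier penalizes constraint violation until the saddle point dominates.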
Simulated Annealing with Extended Neighbourhood, 1991
Abstract

Cited by 21 (14 self)
Simulated Annealing (SA) is a powerful stochastic search method applicable to a wide range of problems for which little prior knowledge is available. It can produce very high quality solutions for hard combinatorial optimization problems. However, the computation time required by SA is very large. Various methods have been proposed to reduce the computation time, but they mainly deal with the careful tuning of SA's control parameters. This paper first analyzes the impact of SA's neighbourhood on SA's performance and shows that SA with a larger neighbourhood is better than SA with a smaller one. The paper also gives a general model of SA, which has both dynamic generation probability and acceptance probability, and proves its convergence. All variants of SA can be unified under such a generalization. Finally, a method of extending SA's neighbourhood is proposed, which uses a discrete approximation to some continuous probability function as the generation function in SA, and several impo...
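The idea of using a discrete approximation to a continuous probability function as the generation function can be sketched as follows; the Cauchy-shaped generation step, both schedules, and all parameter values are illustrative assumptions, not the paper's method.

```python
import math
import random

def sa_extended_neighbourhood(f, x0, lo, hi, steps=5000, T0=5.0, scale=10.0):
    """Minimal sketch of SA whose generation function is a discrete
    approximation of a continuous (here Cauchy-like) distribution,
    giving an extended neighbourhood instead of +/-1 moves.
    Parameter names and schedules are illustrative assumptions."""
    x = best = x0
    for t in range(1, steps + 1):
        T = T0 / math.log(t + 1)                 # acceptance temperature schedule
        width = max(1.0, scale / t)              # generation spread shrinks over time
        # Discretized Cauchy draw: can jump far beyond the +/-1 neighbourhood
        step = int(round(width * math.tan(math.pi * (random.random() - 0.5))))
        y = min(hi, max(lo, x + step))           # keep the candidate in [lo, hi]
        d = f(y) - f(x)
        if d <= 0 or random.random() < math.exp(-d / T):   # Metropolis acceptance
            x = y
        if f(x) < f(best):
            best = x
    return best
```

On a toy one-dimensional objective such as f(x) = (x − 17)², the long-range early jumps let the search cross the domain quickly before the spread collapses to local moves.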
Global Optimization For Constrained Nonlinear Programming, 2001
Abstract

Cited by 12 (2 self)
In this thesis, we develop constrained simulated annealing (CSA), a global optimization algorithm that asymptotically converges to constrained global minima (CGM_dn) with probability one, for solving discrete constrained nonlinear programming problems (NLPs). The algorithm is based on the necessary and sufficient condition for constrained local minima (CLM_dn) in the theory of discrete constrained optimization using Lagrange multipliers developed in our group. The theory proves the equivalence between the set of discrete saddle points and the set of CLM_dn, leading to the first-order necessary and sufficient condition for CLM_dn. To find
Solving Nonlinear Constrained Optimization Problems Through Constraint Partitioning, 2005
Abstract

Cited by 5 (5 self)
In this dissertation, we propose a general approach that can significantly reduce the complexity of solving discrete, continuous, and mixed constrained nonlinear optimization (NLP) problems. A key observation we have made is that most application-based NLPs have structured arrangements of constraints. For example, constraints in AI planning are often localized into coherent groups based on their corresponding subgoals. In engineering design problems, such as the design of a power plant, most constraints exhibit a spatial structure based on the layout of the physical components. In optimal control applications, constraints are localized by stages or time. We have developed techniques to exploit these constraint structures by partitioning the constraints into subproblems related by global constraints. Constraint partitioning leads to much more relaxed subproblems that are significantly easier to solve. However, there exist global constraints relating multiple subproblems that must be resolved. Previous methods cannot exploit such structures using constraint partitioning because they cannot resolve inconsistent global constraints efficiently.
The Theory And Applications Of Discrete Constrained Optimization Using Lagrange Multipliers, 2000
Abstract

Cited by 4 (0 self)
In this thesis, we present a new theory of discrete constrained optimization using Lagrange multipliers and an associated first-order search procedure (DLM) to solve general constrained optimization problems in discrete, continuous and mixed-integer space. The constrained problems are general in the sense that they do not assume differentiability or convexity of the functions. Our proposed theory and methods are targeted at discrete problems and can be extended to continuous and mixed-integer problems by coding continuous variables using a floating-point representation (discretization). We have characterized the errors incurred by such discretization and have proved that there exist upper bounds on the errors. Hence, continuous and mixed-integer constrained problems, as well as discrete ones, can be handled by DLM in a unified way with bounded errors.
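A first-order discrete Lagrange-multiplier search of the kind described can be sketched as follows. This is a hypothetical reading of the abstract, assuming equality constraints h(x) = 0, the augmented function f(x) + Σ λ_i·|h_i(x)|, greedy descent in x, and multiplier ascent on violated constraints; it is not the thesis's exact procedure.

```python
def dlm_sketch(f, h, neighbors, x0, n_lambda, c=1.0, max_iters=1000):
    """Sketch of a first-order discrete Lagrange-multiplier (DLM) search.

    L(x, lam) = f(x) + sum(lam_i * |h_i(x)|).  Descend in x over a discrete
    neighbourhood, ascend in lam when no descent is possible; stop at a
    discrete saddle point (no improving neighbor and constraints satisfied).
    The update rules are an illustrative reading of the abstract."""
    x, lam = x0, [0.0] * n_lambda

    def L(x, lam):
        return f(x) + sum(li * abs(hi) for li, hi in zip(lam, h(x)))

    for _ in range(max_iters):
        # x-update: move to the best neighbor if it lowers L (discrete descent)
        y = min(neighbors(x) + [x], key=lambda z: L(z, lam))
        if L(y, lam) < L(x, lam):
            x = y
            continue
        if all(hi == 0 for hi in h(x)):            # discrete saddle point reached
            return x, lam
        # lam-update: raise multipliers on violated constraints (ascent)
        lam = [li + c * abs(hi) for li, hi in zip(lam, h(x))]
    return x, lam
```

For instance, minimizing f(x) = x over the integers 0..10 subject to h(x) = x − 4 = 0 first stalls at the unconstrained minimum x = 0, then the growing multiplier tilts L until the search walks to the feasible point x = 4 and stops.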
A Modified Simulated Annealing Algorithm for the Quadratic Assignment Problem, 2003
Abstract

Cited by 3 (0 self)
The quadratic assignment problem (QAP) is one of the well-known combinatorial optimization problems and is known for its various applications. In this paper, we propose a modified simulated annealing algorithm for the QAP – MSAQAP. The novelty of the proposed algorithm is an advanced formula for calculating the initial and final temperatures, as well as an original cooling schedule with oscillation, i.e., periodic decreasing and increasing of the temperature. In addition, in order to improve the results obtained, the simulated annealing algorithm is combined with a tabu search based algorithm. We tested our algorithm on a number of instances from the library of QAP instances – QAPLIB. The results obtained from the experiments show that the proposed algorithm appears to be superior to earlier versions of simulated annealing for the QAP. The power of MSAQAP is also corroborated by the fact that a new best known solution was found for one of the largest QAP instances – THO150. Key words: heuristics, local search, simulated annealing, quadratic assignment problem.
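A rough sketch of what SA with an oscillating cooling schedule for the QAP might look like is given below. The reheating rule, the temperature formulas, and all parameter values are guesses for illustration, not the MSAQAP formulas, and the tabu-search hybrid step is omitted.

```python
import math
import random

def qap_cost(F, D, p):
    """Quadratic assignment cost: sum over facility pairs of flow * distance."""
    n = len(p)
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

def sa_qap_oscillating(F, D, steps=20000, T_init=None, T_final=0.1, period=2000):
    """Sketch of SA for the QAP with an oscillating cooling schedule: the
    temperature decays geometrically within each period and is then
    periodically reset (reheated) to a lower peak.  Illustrative only."""
    n = len(F)
    p = list(range(n))
    random.shuffle(p)
    cost = qap_cost(F, D, p)
    best, best_cost = p[:], cost
    T_top = T_init if T_init is not None else max(cost / n, 1.0)
    T = T_top
    alpha = (T_final / T_top) ** (1.0 / period)    # decay factor within one period
    for t in range(steps):
        if t and t % period == 0:                  # oscillation: periodic reheating
            T_top *= 0.5
            T = T_top
        i, j = random.sample(range(n), 2)
        p[i], p[j] = p[j], p[i]                    # swap-move neighbourhood
        new_cost = qap_cost(F, D, p)
        d = new_cost - cost
        if d <= 0 or random.random() < math.exp(-d / T):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = p[:], cost
        else:
            p[i], p[j] = p[j], p[i]                # undo rejected swap
        T = max(T * alpha, 1e-9)
    return best, best_cost
```

On tiny instances (e.g. n = 4) the high-temperature phases amount to a random walk over permutations, so the tracked best solution reaches the exact optimum.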
Simulated Annealing for Communication Network Reliability Improvement, 1995
Abstract

Cited by 3 (0 self)
A communication network is composed of a set of centers, which transmit and receive data, and a set of links, which transport this data. To evaluate the capacity of a communication network architecture to withstand failures of some of its components, several reliability metrics are currently used.
Quasi-Statically Cooled Markov Chains, 1994
Abstract

Cited by 1 (0 self)
We consider time-inhomogeneous Markov chains on a finite state space, whose transition probabilities p_ij(t) = c_ij ε(t)^{V_ij} are proportional to powers of a vanishingly small parameter ε(t). We determine the precise relationship between this chain and the corresponding time-homogeneous chains p_ij = c_ij ε^{V_ij}, as ε ↓ 0. Let {π_i^ε} be the steady-state distribution of this time-homogeneous chain. We characterize the orders {η_i} in π_i^ε = Θ(ε^{η_i}). We show that if ε(t) ↓ 0 slowly enough, then the timewise occupation measures β_i := sup{q > 0 | Σ_{t=1}^∞ ε(t)^q Prob(x(t) = i) = +∞}, called the recurrence orders, satisfy β_i − β_j = η_j − η_i. Moreover, if G := {i | η_i = min_j η_j} is the set of "ground states" of the time-homogeneous chain, then x(t) → G, in an appropriate sense, whenever ε(t) is "cooled" slowly. We also show that there exists a critical ρ such that x(t) → G if and only if Σ_{t=1}^∞ ε(t)^ρ = +∞ ...
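A toy simulation of such a quasi-statically cooled chain, with transition weights proportional to ε(t)^{V_ij} and an illustrative slow schedule ε(t) = 1/log(t + 3) (an assumption for this sketch, not a schedule from the paper), shows the occupation concentrating on the ground states:

```python
import math
import random

def cooled_chain(V, steps=20000):
    """Toy quasi-statically cooled chain: from state i, the move to state j
    gets unnormalized weight eps(t)**V[i][j], with V[i][j] >= 0 and
    V[i][i] = 0.  As eps(t) -> 0 slowly, the chain should settle into the
    ground states (states reachable at order-0 cost)."""
    n = len(V)
    x = 0
    visits = [0] * n
    for t in range(steps):
        eps = 1.0 / math.log(t + 3)                   # slow cooling, eps < 1 for all t
        weights = [eps ** V[x][j] for j in range(n)]  # unnormalized transition weights
        x = random.choices(range(n), weights=weights)[0]
        visits[x] += 1
    return visits
```

With costs V[i][j] = max(0, E[j] − E[i]) for state energies E, the visit counts concentrate on the minimum-energy state as t grows, matching the ground-state convergence the abstract describes.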
Thermodynamic Formalism Of Neural Computing, 1995
Abstract

Cited by 1 (0 self)
Neural networks are systems of interconnected processors mimicking some of the brain's functions. After a rapid overview of neural computing, the thermodynamic formalism of the learning procedure is introduced. Besides its use in introducing efficient stochastic learning algorithms, it gives insight in terms of information theory. The main emphasis is on the information restitution process; stochastic evolution is used as the starting point for introducing the statistical mechanics of associative memory. Instead of formulating problems in their most general setting, we prefer to state precise results on specific models. This report mainly presents those features that are relevant when the neural net becomes very large. A survey of the most recent results is given and the main open problems are pointed out.