Results 1–10 of 17
An effective implementation of the Lin-Kernighan traveling salesman heuristic
 European Journal of Operational Research
, 2000
Abstract

Cited by 120 (1 self)
This report describes an implementation of the Lin-Kernighan heuristic, one of the most successful methods for generating optimal or near-optimal solutions for the symmetric traveling salesman problem. Computational tests show that the implementation is highly effective. It has found optimal solutions for all solved problem instances we have been able to obtain, including a 7397-city problem (the largest nontrivial problem instance solved to optimality today). Furthermore, the algorithm has improved the best known solutions for a series of large-scale problems with unknown optima, among these an 85900-city problem.
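The edge-exchange idea behind Lin-Kernighan can be illustrated by its simplest special case, a plain 2-opt local search. The sketch below (with made-up coordinates) shows only that special case, not the variable-depth moves of the full heuristic:

```python
import math

def tour_length(tour, dist):
    """Length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse a tour segment whenever doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # the two edges would share a city
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Gain of replacing edges (a,b),(c,d) by (a,c),(b,d).
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# Four points on a unit square; the crossing tour 0-2-1-3 gets uncrossed.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
tour = two_opt([0, 2, 1, 3], dist)
```

On this instance the crossing tour of length 2 + 2√2 is reduced to the optimal square tour of length 4 after a single exchange.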
Convergence of a Simple Subgradient Level Method
 Math. Programming
, 1998
Abstract

Cited by 4 (0 self)
We study the subgradient projection method for convex optimization with Brännlund's level control for estimating the optimal value. We establish global convergence in objective values without additional assumptions employed in the literature. Key words: nondifferentiable optimization, subgradient optimization. 1 Introduction. We consider a method for the minimization problem $f^* = \inf_S f$ under the following assumptions: $S$ is a nonempty closed convex set in $\mathbb{R}^n$; $f : \mathbb{R}^n \to \mathbb{R}$ is a convex function; for each $x \in S$ we can compute $f(x)$ and a subgradient $g_f(x) \in \partial f(x)$ of $f$ at $x$; and for each $x \in \mathbb{R}^n$ we can find $P_S x = \arg\min_{y \in S} |x - y|$, its orthogonal projection on $S$, where $|\cdot|$ is the Euclidean norm. The optimal set $\mathrm{Arg\,min}_S f$ may be empty. Given the $k$th iterate $x^k \in S$ and a target level $f^k_{\mathrm{lev}}$ that estimates $f^*$, we may use $H_k = \{ x : f(x^k) + \langle g^k, x - x^k \rangle \le f^k_{\mathrm{lev}} \}$ with $g^k = g_f(x^k) \in \partial f(x^k)$ (1.1) to approximate t...
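As a rough sketch of the projected-subgradient iteration with a target level: the objective f(x) = |x1| + |x2| and the box constraint below are assumptions for illustration, and the step rule is the generic Polyak-type step toward a level, not Brännlund's specific level-control update:

```python
def f(x):
    # Assumed test objective: the l1-norm in two dimensions.
    return abs(x[0]) + abs(x[1])

def subgrad(x):
    # One subgradient of the l1-norm at x (componentwise sign, 0 at 0).
    return [(v > 0) - (v < 0) for v in x]

def project_box(x, lo=-5.0, hi=5.0):
    # Orthogonal projection P_S onto the box S = [lo, hi]^2.
    return [min(max(v, lo), hi) for v in x]

def level_subgradient(x, f_lev=0.0, iters=200):
    """Projected subgradient steps with Polyak-type step toward level f_lev."""
    for _ in range(iters):
        g = subgrad(x)
        norm2 = sum(v * v for v in g)
        if norm2 == 0:
            break  # 0 is a subgradient, so x is optimal
        t = (f(x) - f_lev) / norm2  # step size aimed at the target level
        x = project_box([xi - t * gi for xi, gi in zip(x, g)])
    return x

x = level_subgradient([4.0, -3.0])
```

With the exact optimal value used as the level (f_lev = 0 here), this iteration reaches the minimizer of this toy problem in a handful of steps; the point of level methods is precisely to work when f* must be estimated instead.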
On a modified subgradient algorithm for dual problems via sharp augmented Lagrangian
 Journal of Global Optimization
, 2006
Abstract

Cited by 3 (1 self)
We study convergence properties of a modified subgradient algorithm, applied to the dual problem defined by the sharp augmented Lagrangian. The primal problem we consider is nonconvex and nondifferentiable, with equality constraints. We obtain primal and dual convergence results, as well as a condition for the existence of a dual solution. Using a practical selection of the step size parameters, we demonstrate the algorithm and its advantages on test problems, including an integer programming problem and an optimal control problem. Key words: Nonconvex programming; nonsmooth optimization; augmented Lagrangian; sharp Lagrangian; subgradient optimization.
Improving traditional subgradient scheme for Lagrangean relaxation: an application to location problems
 International Journal of Mathematical Algorithms
, 1999
Abstract

Cited by 2 (2 self)
Lagrangean relaxation is widely used to solve combinatorial optimization problems. A known difficulty in applying Lagrangean relaxation is the definition of a convenient step size control in subgradient-like methods. Even while preserving theoretical convergence properties, a poorly defined control can degrade performance and increase computational times, a critical point for large-scale instances. We show in this work how to accelerate a classical subgradient method, using the local information of the surrogate constraints relaxed in the Lagrangean relaxation. It results in a one-dimensional search that corrects the step size and is independent of the step size control used. The application to Capacitated and Uncapacitated Facility Location problems is shown. Several computational tests confirm the superiority of this scheme. Key words: Location problems, Lagrangean relaxation, subgradient method. 1. Introduction Facility location is the problem of locating a number of facilities from a s...
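For context, the classical step size control the abstract refers to can be sketched on an assumed toy instance (min x1 + 2*x2 subject to x1 + x2 >= 1, x binary, dualizing the constraint with multiplier u >= 0). This is the textbook rule t = theta * (UB - L(u)) / ||g||^2, not the paper's one-dimensional correction:

```python
def lagrangean(u):
    """Evaluate the Lagrangean dual function L(u) and a subgradient at u."""
    # x_j = 1 exactly when its reduced cost c_j - u is negative.
    costs = (1.0, 2.0)
    x = [1 if cj - u < 0 else 0 for cj in costs]
    value = sum(cj * xj for cj, xj in zip(costs, x)) + u * (1 - sum(x))
    g = 1 - sum(x)  # subgradient of L at u (constraint violation)
    return value, g

def solve_dual(u=0.0, ub=3.0, theta=1.0, iters=100):
    """Maximize L(u) by subgradient steps with the classical step control."""
    best = float("-inf")
    for _ in range(iters):
        val, g = lagrangean(u)
        best = max(best, val)
        if g == 0:
            break  # zero subgradient: dual optimum reached
        t = theta * (ub - val) / (g * g)  # classical step size rule
        u = max(0.0, u + t * g)           # ascent step, kept nonnegative
        theta *= 0.9                      # one common decay schedule
    return best

lb = solve_dual()
```

On this instance the dual bound reaches the optimal primal value 1 (attained at x = (1, 0)), i.e. there is no duality gap here; the zig-zagging of u before termination is exactly the behavior that motivates better step size controls.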
An Inexact Modified Subgradient Algorithm for Nonconvex Optimization ∗
, 2008
Abstract

Cited by 1 (0 self)
We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that under an approximate (i.e., inexact) minimization of the sharp augmented Lagrangian, the main convergence properties of the MSG algorithm are preserved for the IMSG algorithm. Inexact minimization may allow problems to be solved with less computational effort. We illustrate this through test problems, including an optimal bang–bang control problem, under several different inexactness schemes.
Exact and Heuristic Approaches for Assignment in Multiple-Container Packing
, 1997
Abstract

Cited by 1 (0 self)
This paper deals with cutting/packing problems in which there is a set of pieces to be allocated and arranged in a set of "containers." In an apparel manufacturing application, the containers might be unused areas of the fabric after large pieces have been placed, and the pieces of interest might be the smaller pieces. In a sheet metal application, the containers could be the sheets themselves, and the pieces the entire set of pieces to be arranged. The specific problem addressed takes as input a set of groups (of pieces), and mappings from pieces to groups and groups to containers. The method by which the groups are generated and the particular geometric constraints (e.g., translation only, or translation plus rotation) are not critical for the methods developed here. This paper presents an integer programming formulation of the multiple-container group assignment problem (MCGAP). Based on long and/or highly variable solution times for some problem instances, a Lagrangian heuristic pro...
An Infeasible-Point Subgradient Method Using Adaptive Approximate Projections ⋆
Abstract

Cited by 1 (1 self)
We propose a new subgradient method for the minimization of nonsmooth convex functions over a convex set. To speed up computations we use adaptive approximate projections, which only require moving within a certain distance of the exact projections (a distance that decreases over the course of the algorithm). In particular, the iterates in our method can be infeasible throughout the whole procedure. Nevertheless, we provide conditions which ensure convergence to an optimal feasible point under suitable assumptions. One convergence result deals with step size sequences that are fixed a priori. Two other results handle dynamic Polyak-type step sizes depending on a lower or upper estimate of the optimal objective function value, respectively. Additionally, we briefly sketch two applications: optimization with convex chance constraints, and finding the minimum ℓ1-norm solution to an underdetermined linear system, an important problem in compressed sensing.
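The approximate-projection idea can be caricatured in a few lines. The problem below (a linear objective over the unit disk) and the tolerance schedule eps_k = 1/k² are assumptions for illustration, not the paper's conditions; note that the iterates are allowed to stay slightly infeasible throughout:

```python
import math

def f(x):
    return x[0] + x[1]  # assumed linear objective; minimum on the unit disk is -sqrt(2)

def grad(x):
    return [1.0, 1.0]   # f is linear, so its (sub)gradient is constant

def exact_project(x):
    # Exact projection onto the unit disk.
    n = math.hypot(x[0], x[1])
    return x if n <= 1 else [x[0] / n, x[1] / n]

def approx_project(x, eps):
    """Move only to within distance eps of the exact projection."""
    p = exact_project(x)
    d = math.hypot(x[0] - p[0], x[1] - p[1])
    if d <= eps:
        return x  # close enough: keep the (possibly infeasible) iterate
    s = (d - eps) / d
    return [x[0] + s * (p[0] - x[0]), x[1] + s * (p[1] - x[1])]

x = [3.0, 3.0]
for k in range(1, 2000):
    t, eps = 1.0 / k, 1.0 / (k * k)  # a priori step sizes, shrinking tolerance
    g = grad(x)
    x = approx_project([x[0] - t * g[0], x[1] - t * g[1]], eps)
```

Because eps_k shrinks faster than the step sizes, the infeasibility is driven to zero and the objective value approaches the constrained optimum -√2 even though no iterate is ever exactly projected.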
Fast and Low Complexity Blind Equalization via Subgradient Projections
, 2005
Abstract

Cited by 1 (0 self)
We propose a novel blind equalization method based on subgradient search over a convex cost surface. This is an alternative to existing iterative blind equalization approaches such as the Constant Modulus Algorithm (CMA), which often suffer from convergence problems caused by their nonconvex cost functions. The proposed method is an iterative algorithm, called the SubGradient-based Blind Algorithm (SGBA), for both real and complex constellations, with a very simple update rule. It is based on the minimization of the ℓ∞ norm of the equalizer output under a linear constraint on the equalizer coefficients, using subgradient iterations. The algorithm has a nice convergence behavior attributed to the convex ℓ∞ cost surface as well as the step size selection rules associated with the subgradient search. We illustrate the performance of the algorithm using examples with both complex and real constellations, where we show that the proposed algorithm's convergence is less sensitive to initial point selection, and a fast convergence behavior can be achieved with a judicious selection of step sizes. Furthermore, the amount of data required for training the equalizer is significantly lower than in most of the existing schemes.
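The core step, minimizing an ℓ∞ criterion under a linear constraint by projected subgradient iterations, can be sketched as follows. The small matrix A and the constraint vector c are invented stand-ins (the real algorithm works on equalizer taps and received samples), and the step size rule is a generic diminishing schedule:

```python
import math

A = [[1.0, 0.5], [0.2, 1.0], [-0.7, 0.3]]  # assumed stand-in for received data
c = [1.0, 0.0]                             # assumed linear constraint c . w = 1

def output(w):
    return [sum(aij * wj for aij, wj in zip(row, w)) for row in A]

def linf(v):
    return max(abs(vi) for vi in v)

def subgrad(w):
    # A subgradient of w -> ||A w||_inf: the row of A achieving the max,
    # signed by the corresponding output component.
    y = output(w)
    i = max(range(len(y)), key=lambda j: abs(y[j]))
    s = 1.0 if y[i] >= 0 else -1.0
    return [s * aij for aij in A[i]]

def project(w):
    # Orthogonal projection onto the hyperplane {w : c . w = 1}.
    cw = sum(ci * wi for ci, wi in zip(c, w))
    cc = sum(ci * ci for ci in c)
    return [wi - (cw - 1.0) * ci / cc for ci, wi in zip(c, w)]

w = project([1.0, 0.0])
best = linf(output(w))
for k in range(1, 3001):
    g = subgrad(w)
    t = 0.1 / math.sqrt(k)  # diminishing step size schedule
    w = project([wi - t * gi for wi, gi in zip(w, g)])
    best = min(best, linf(output(w)))
```

Because the ℓ∞ cost restricted to the constraint hyperplane is convex, the best value found approaches the global minimum regardless of the starting point, which is the convergence advantage over nonconvex criteria that the abstract highlights.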
An Update Rule and a Convergence Result for a Penalty Function Method
, 2007
Abstract
We use a primal-dual scheme to devise a new update rule for a penalty function method applicable to general optimization problems, including nonsmooth and nonconvex ones. The update rule we introduce uses dual information in a simple way. Numerical test problems show that our update rule has certain advantages over the classical one. We study the relationship between exact penalty parameters and dual solutions. Under the differentiability of the dual function at the least exact penalty parameter, we establish convergence of the minimizers of the sequential penalty functions to a solution of the original problem. Numerical experiments are then used to illustrate some of the theoretical results. Key words: penalty function method, penalty parameter update, least exact penalty parameter, duality, nonsmooth optimization, nonconvex optimization. Mathematics Subject Classification: 49M30; 49M29; 49M37; 90C26; 90C30.
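To make the notion of a least exact penalty parameter concrete, here is a tiny assumed example (minimize x² subject to x = 1) with the classical geometric parameter update; the paper's dual-information rule is different and not reproduced here:

```python
# Exact penalty P_r(x) = x**2 + r*|x - 1| for the constraint x = 1.
# The least exact penalty parameter is r* = |f'(1)| = 2: once r >= 2,
# the unconstrained penalty minimizer coincides with the solution x = 1.

def penalty_min(r):
    # Closed-form minimizer of x**2 + r*|x - 1| (1-D, so no solver needed):
    # for r < 2 it is x = r/2 (strictly infeasible); for r >= 2 it is x = 1.
    return r / 2 if r < 2 else 1.0

r, x = 0.25, None
history = []  # (penalty parameter, penalty minimizer) per outer iteration
while True:
    x = penalty_min(r)
    history.append((r, x))
    if abs(x - 1.0) < 1e-12:  # constraint satisfied: r has become exact
        break
    r *= 2.0  # classical geometric update of the penalty parameter
```

The run doubles r from 0.25 until it crosses the threshold r* = 2, at which point the exact penalty recovers the constrained solution; a dual-informed update aims to reach such a threshold with fewer, better-targeted increases.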