Results 11–20 of 37
On second-order optimality conditions for nonlinear programming
 Optimization
Abstract

Cited by 4 (0 self)
A new second-order condition is given, which depends on a weak constant rank constraint requirement. We show that practical and publicly available algorithms (www.ime.usp.br/~egbirgin/tango) of augmented Lagrangian type converge, after slight modifications, to stationary points defined by the new condition.
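For readers unfamiliar with augmented Lagrangian methods, the outer iteration they build on can be sketched as follows. This is a toy illustration only, not the TANGO code: all names are invented, and the inner minimization is plain gradient descent.

```python
import numpy as np

def augmented_lagrangian(f_grad, h, h_jac, x0, lam0, rho=10.0,
                         iters=20, inner_steps=200, step=1e-2):
    """Toy augmented-Lagrangian loop for min f(x) s.t. h(x) = 0.
    Inner loop approximately minimizes L_rho(x, lam); outer loop
    applies the first-order multiplier update."""
    x, lam = x0.astype(float), lam0.astype(float)
    for _ in range(iters):
        for _ in range(inner_steps):
            # gradient of f(x) + lam^T h(x) + (rho/2) ||h(x)||^2
            g = f_grad(x) + h_jac(x).T @ (lam + rho * h(x))
            x -= step * g
        lam = lam + rho * h(x)  # first-order multiplier update
    return x, lam

# example: min x0^2 + x1^2 subject to x0 + x1 = 1 (solution (0.5, 0.5), lam = -1)
f_grad = lambda x: 2 * x
h = lambda x: np.array([x[0] + x[1] - 1.0])
h_jac = lambda x: np.array([[1.0, 1.0]])
x, lam = augmented_lagrangian(f_grad, h, h_jac, np.zeros(2), np.zeros(1))
```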
Infeasibility Detection and SQP Methods for Nonlinear Optimization
, 2008
Abstract

Cited by 4 (2 self)
This paper addresses the need for nonlinear programming algorithms that provide fast local convergence guarantees regardless of whether a problem is feasible or infeasible. We present an active-set sequential quadratic programming method derived from an exact penalty approach that adjusts the penalty parameter appropriately to emphasize optimality over feasibility, or vice versa. Conditions are presented under which superlinear convergence is achieved in the infeasible case. Numerical experiments illustrate the practical behavior of the method.
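A minimal sketch of the classical ℓ1 exact-penalty function such approaches minimize (illustrative only; the names are not from the paper):

```python
def l1_penalty(f, c, x, rho):
    """phi(x; rho) = f(x) + rho * ||c(x)||_1 for equality constraints c(x) = 0.
    A larger rho emphasizes feasibility; a smaller rho emphasizes optimality."""
    return f(x) + rho * sum(abs(ci) for ci in c(x))

# toy problem: f(x) = x^2 with the single constraint x - 1 = 0
phi = l1_penalty(lambda x: x ** 2, lambda x: [x - 1.0], 0.5, rho=10.0)  # 0.25 + 10*0.5
```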
A Filter Active-Set Trust-Region Method
, 2007
Abstract

Cited by 3 (0 self)
2.1 Sequential Linear-Quadratic Programming Methods
2.2 Difficulties with the LP/TR Step Computation
A Line Search Exact Penalty Method Using Steering Rules
, 2009
Abstract

Cited by 3 (1 self)
Line search algorithms for nonlinear programming must include safeguards to enjoy global convergence properties. This paper describes an exact penalization approach that extends the class of problems that can be solved with line search SQP methods. In the new algorithm, the penalty parameter is adjusted at every iteration to ensure sufficient progress in linear feasibility and to promote acceptance of the step. A trust region is used to assist in the determination of the penalty parameter (but not in the step computation). It is shown that the algorithm enjoys favorable global convergence properties. Numerical experiments illustrate the behavior of the algorithm in various difficult situations.
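A caricature of such a steering rule follows. Everything here is an invented illustration, not the paper's rule: `predicted_red(rho)` stands for the infeasibility reduction achieved by the step computed with penalty parameter `rho`, and `best_red` for the reduction achieved by a pure feasibility step.

```python
def steer_penalty(predicted_red, best_red, rho, beta=0.9, factor=10.0, rho_max=1e8):
    """Increase rho until the step computed with it recovers at least a
    fraction beta of the best achievable infeasibility reduction."""
    while predicted_red(rho) < beta * best_red and rho < rho_max:
        rho *= factor
    return rho

# toy model in which larger rho yields steps making more feasibility progress
rho = steer_penalty(lambda r: r / (r + 1.0), best_red=1.0, rho=1.0)
```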
An Interior-Point Algorithm for Large-Scale Nonlinear Optimization with Inexact Step Computations
 SIAM Journal on Scientific Computing
Abstract

Cited by 3 (0 self)
We propose a sequential quadratic optimization method for solving nonlinear constrained optimization problems. The novel feature of the algorithm is that, during each iteration, the primal-dual search direction is allowed to be an inexact solution of a given quadratic optimization subproblem. We present a set of generic, loose conditions that the search direction (i.e., the inexact subproblem solution) must satisfy so that global convergence of the algorithm for solving the nonlinear problem is guaranteed. The algorithm can be viewed as a globally convergent inexact Newton-based method. The results of numerical experiments are provided to illustrate the reliability and efficiency of the proposed numerical method.
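The inexact-Newton viewpoint can be shown with a scalar sketch (not the paper's algorithm): the linear system is solved only up to a relative residual eta, yet the iteration still converges.

```python
import numpy as np

def inexact_newton(F, J, x0, eta=0.1, tol=1e-10, max_iter=50):
    """Inexact Newton sketch: each step d satisfies only ||J d + F|| <= eta ||F||
    instead of solving J d = -F exactly."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        Fx = np.atleast_1d(F(x))
        if np.linalg.norm(Fx) < tol:
            break
        d = np.linalg.solve(np.atleast_2d(J(x)), -Fx)
        d *= 1.0 - 0.5 * eta  # damp the exact step; residual norm is 0.5*eta*||F||
        x = x + d
    return x

# root of x^2 - 2 starting from x = 1.5
root = inexact_newton(lambda x: x[0] ** 2 - 2.0, lambda x: [[2.0 * x[0]]], [1.5])
```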
Conservative Scales in Packing Problems
Abstract

Cited by 2 (2 self)
Packing problems (sometimes also called cutting problems) are combinatorial optimization problems concerned with placement of objects (items) in one or several containers. Some packing problems are special cases of several other problems such as resource-constrained scheduling, capacitated vehicle routing, etc. In this paper we consider a bounding technique for one- and higher-dimensional orthogonal packing problems, called conservative scales (CS) (in the scheduling terminology, redundant resources). CS are related to the possible structure of resource consumption: filling of a bin, distribution of the resource to the jobs, etc. In terms of packing, CS are modified item sizes such that the set of feasible packings is not reduced. In fact, every CS represents a valid inequality for a certain binary knapsack polyhedron. CS correspond to dual variables of the set-partitioning model of a special 1D cutting-stock problem. Some CS can be constructed by (data-dependent) dual-feasible functions ((D)DFFs). We discuss the relation of CS to DFFs: CS assume that at most 1 copy of each object can appear in a combination, whereas DFFs allow several copies. The literature has investigated so-called extremal maximal DFFs (EMDFFs), which should provide very strong CS. Analogously, we introduce the notions of maximal CS (MCS) and extremal maximal CS (EMCS) and show that EMDFFs do not necessarily produce (E)MCS. We propose fast greedy methods to “maximize” a given CS. Using the fact that EMCS define facets of the binary knapsack polyhedron, we use lifted cover inequalities as EMCS. For higher-dimensional orthogonal packing, we propose a Sequential LP (SLP) method over the set of CS and investigate its convergence. Numerical results are presented.
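As a concrete illustration of a (non-data-dependent) DFF, here is the classical threshold function, valid for any t in (0, 1/2]: applying it to item sizes and summing gives a valid lower bound on the number of unit bins. The code is a hedged sketch; the function names are ours, not the paper's.

```python
import math

def dff_threshold(x, t):
    """Classical threshold DFF: round sizes > 1-t up to 1 and sizes < t down to 0.
    For 0 < t <= 1/2, any item set fitting in one bin still sums to <= 1
    after the transformation, so feasible packings are preserved."""
    if x > 1.0 - t:
        return 1.0
    if x < t:
        return 0.0
    return x

def dff_lower_bound(sizes, t):
    # the transformed volume, rounded up, bounds the number of bins from below
    return math.ceil(sum(dff_threshold(s, t) for s in sizes) - 1e-9)

# three items of size 0.6: the plain volume bound gives ceil(1.8) = 2 bins,
# but with t = 0.45 each item maps to 1, proving that 3 bins are needed
bound = dff_lower_bound([0.6, 0.6, 0.6], 0.45)
```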
ON THE CONVERGENCE OF AN ACTIVE SET METHOD FOR ℓ1 MINIMIZATION
Abstract

Cited by 2 (1 self)
We analyze an abridged version of the active-set algorithm FPC_AS proposed in [18] for solving the ℓ1-regularized problem, i.e., a weighted sum of the ℓ1-norm ‖x‖1 and a smooth function f(x). The active-set algorithm alternates between two stages. In the first, “nonmonotone line search” (NMLS), stage, an iterative first-order method based on “shrinkage” is used to estimate the support at the solution. In the second, “subspace optimization,” stage, a smaller smooth problem is solved to recover the magnitudes of the nonzero components of x. We show that NMLS itself is globally convergent and that the convergence rate is at least R-linear. In particular, NMLS is able to identify the zero components of a stationary point after a finite number of steps under some mild conditions. The global convergence of FPC_AS is established based on the properties …
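The “shrinkage” operator at the heart of the first stage is soft-thresholding; a minimal sketch (illustrative, not the solver's implementation):

```python
import numpy as np

def shrink(y, tau):
    """Soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

def shrinkage_step(x, grad_f, step, mu):
    # one first-order iteration for min mu*||x||_1 + f(x); components driven
    # exactly to zero serve as the support estimate
    return shrink(x - step * grad_f(x), step * mu)

z = shrink(np.array([3.0, -0.5, 1.0]), 1.0)  # entries below the threshold vanish
# one step with grad_f = identity (i.e., f(x) = ||x||^2 / 2)
y = shrinkage_step(np.array([2.0, 0.05]), lambda v: v, step=0.5, mu=1.0)
```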
Nonlinear programming without a penalty function
 Mathematical Programming
, 2002
"... a filter ..."
COMPARISON AND AUTOMATED SELECTION OF LOCAL OPTIMIZATION SOLVERS FOR INTERVAL GLOBAL OPTIMIZATION METHODS
Abstract

Cited by 1 (1 self)
We compare six state-of-the-art local optimization solvers with a focus on their efficiency when invoked within an interval-based global optimization algorithm. For comparison purposes we design three special performance indicators: a solution check indicator (measuring whether the local minimizers found are good candidates for near-optimal verified feasible points), a function value indicator (measuring the contribution to the progress of the global search), and a running time indicator (estimating the computational cost of the local search within the global search). The solvers are compared on the COCONUT Environment test set consisting of 1307 problems. Our main target is to predict the behavior of the solvers, in terms of the three performance indicators, on a new problem. For this we introduce a k-nearest-neighbor method applied over a feature space consisting of several categorical and numerical features of the optimization problems. The quality and robustness of the prediction are demonstrated by various quality measurements with detailed comparative tests. In particular, we found that on the test set we are able to pick a ‘best’ solver in 66–89% of the cases and to avoid picking all ‘useless’ solvers in 95–99% of the cases (when a useful alternative exists). The resulting automated solver selection method is implemented as an inference engine of the COCONUT Environment.
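The k-nearest-neighbor prediction step can be sketched as follows (a generic illustration over an invented 2-D feature space, not the COCONUT inference engine):

```python
from collections import Counter

def knn_predict(features, labels, query, k=3, dist=None):
    """Predict a label for `query` by majority vote among the k training
    points nearest in the feature space (squared Euclidean by default)."""
    if dist is None:
        dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(range(len(features)), key=lambda i: dist(features[i], query))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# toy: pick a 'best solver' label for a new problem from 2-D problem features
features = [(1.0, 0.0), (0.9, 0.2), (0.0, 1.0), (0.1, 0.8)]
labels = ["solverA", "solverA", "solverB", "solverB"]
best = knn_predict(features, labels, (0.8, 0.1), k=3)
```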