Results 21–30 of 263
Failure of Global Convergence for a Class of Interior Point Methods for Nonlinear Programming
Mathematical Programming, 2000
Abstract

Cited by 34 (4 self)
Using a simple analytical example, we demonstrate that a class of interior point methods for general nonlinear programming, including some current methods, is not globally convergent. It is shown that those algorithms do produce limit points that are neither feasible nor stationary points of some measure of the constraint violation, when applied to a well-posed problem.

1 Introduction
Over the past decade a variety of interior point methods for nonconvex nonlinear programming (NLP) have been proposed and found to be efficient in practice (see e.g. [1]-[4], [6]-[8], [10]-[12]). Based on earlier work [5], these methods come in different varieties, such as primal or primal-dual methods, line search or trust region methods, with different merit functions, different strategies to update the barrier parameter, etc. For some algorithms, theoretical global convergence properties have been proved. It has been shown that under certain assumptions the considered method converges to a loca...
Parallel Variable Distribution
SIAM Journal on Optimization, 1994
Abstract

Cited by 34 (5 self)
We present an approach for solving optimization problems in which the variables are distributed among p processors. Each processor has primary responsibility for updating its own block of variables in parallel while allowing the remaining variables to change in a restricted fashion (e.g. along a steepest descent, quasi-Newton, or any arbitrary direction). This "forget-me-not" approach is a distinctive feature of our algorithm which has not been analyzed before. The parallelization step is followed by a fast synchronization step wherein the affine hull of the points computed by the parallel processors and the current point is searched for an optimal point. Convergence to a stationary point under continuous differentiability is established for the unconstrained case, as well as a linear convergence rate under the additional assumption of a Lipschitzian gradient and strong convexity. For problems constrained to lie in the Cartesian product of closed convex sets, convergence is establish...
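The block-distributed update scheme the abstract describes can be sketched in toy form. Everything here is an illustrative assumption, not the paper's algorithm: the objective is a separable quadratic, the "forget-me-not" movement of foreign variables is a fixed damped gradient step, and the synchronization step simply keeps the best candidate point rather than searching the full affine hull.

```python
# Toy PVD sketch on f(x) = sum_i (x[i] - target[i])**2.
# Simplifications (assumptions, not the paper's method): damped foreign
# steps stand in for the restricted "forget-me-not" directions, and
# synchronization picks the best candidate instead of an affine-hull search.

def f(x, target):
    return sum((xi - ti) ** 2 for xi, ti in zip(x, target))

def grad(x, target):
    return [2 * (xi - ti) for xi, ti in zip(x, target)]

def pvd_step(x, target, blocks, step=0.25):
    """One iteration: each 'processor' takes a full gradient step on its
    own block while foreign variables move along the same direction in a
    restricted (scaled-down) fashion; then the best point is kept."""
    g = grad(x, target)
    candidates = []
    for block in blocks:
        y = list(x)
        for i in range(len(x)):
            scale = 1.0 if i in block else 0.1   # foreign vars are not frozen
            y[i] -= scale * step * g[i]
        candidates.append(y)
    # Simplified synchronization: best of candidates and the current point.
    return min(candidates + [x], key=lambda y: f(y, target))

target = [1.0, -2.0, 3.0, 0.5]
x = [0.0, 0.0, 0.0, 0.0]
blocks = [{0, 1}, {2, 3}]        # variables distributed over p = 2 processors
for _ in range(100):
    x = pvd_step(x, target, blocks)
print(f(x, target) < 1e-3)
```

Because each candidate contracts every coordinate's error, the iterates approach the minimizer even though each processor fully controls only its own block.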
Robust game theory
2006
Abstract

Cited by 33 (0 self)
We present a distribution-free model of incomplete-information games, both with and without private information, in which the players use a robust optimization approach to contend with payoff uncertainty. Our “robust game” model relaxes the assumptions of Harsanyi’s Bayesian game model, and provides an alternative distribution-free equilibrium concept, which we call “robust-optimization equilibrium,” to that of the ex post equilibrium. We prove that the robust-optimization equilibria of an incomplete-information game subsume the ex post equilibria of the game and are, unlike the latter, guaranteed to exist when the game is finite and has a bounded payoff uncertainty set. For arbitrary robust finite games with bounded polyhedral payoff uncertainty sets, we show that we can compute a robust-optimization equilibrium by methods analogous to those for identifying a Nash equilibrium of a finite game with complete information. In addition, we present computational results.
A New Bound for the Quadratic Assignment Problem Based on Convex Quadratic Programming
Mathematical Programming, 1999
Abstract

Cited by 33 (4 self)
We describe a new convex quadratic programming bound for the quadratic assignment problem (QAP). The construction of the bound uses a semidefinite programming representation of a basic eigenvalue bound for QAP. The new bound dominates the well-known projected eigenvalue bound, and appears to be competitive with existing bounds in the tradeoff between bound quality and computational effort.
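For context, the basic eigenvalue bound the abstract builds on is commonly stated, for the trace formulation of QAP with symmetric matrices A and B, roughly as follows (background recalled from the eigenvalue-bound literature, not quoted from this paper):

```latex
\[
\min_{X \in \Pi_n} \operatorname{tr}\!\left(A X B X^{\mathsf T}\right)
\;\ge\;
\sum_{i=1}^{n} \lambda_i(A)\,\mu_{n+1-i}(B),
\qquad
\lambda_1 \le \cdots \le \lambda_n,\quad
\mu_1 \le \cdots \le \mu_n,
\]
```

where $\Pi_n$ is the set of $n \times n$ permutation matrices and the right-hand side is the minimal scalar product of the two eigenvalue vectors, pairing the smallest eigenvalues of $A$ with the largest of $B$.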
Active Sets, Nonsmoothness And Sensitivity
2001
Abstract

Cited by 31 (14 self)
Nonsmoothness abounds in optimization, but the way it typically arises is highly structured. Nonsmooth behaviour of an objective function is usually associated, locally, with an active manifold: on this manifold the function is smooth, whereas in normal directions it is "vee-shaped". Active set ideas in optimization depend heavily on this structure. Important examples of such functions include the pointwise maximum of some smooth functions, and the maximum eigenvalue of a parametrized symmetric matrix. Among possible foundations for practical nonsmooth optimization, this broad class of "partly smooth" functions seems a promising candidate, enjoying a powerful calculus and sensitivity theory. In particular, we show under a natural regularity condition that critical points of partly smooth functions are stable: small perturbations to the function cause small movements of the critical point on the active manifold.

Department of Combinatorics & Optimization, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada. Email: aslewis@math.uwaterloo.ca. Research supported by NSERC.
A Truncated Primal-Infeasible Dual-Feasible Network Interior Point Method
1994
Abstract

Cited by 29 (3 self)
In this paper we introduce the truncated primal-infeasible dual-feasible interior point algorithm for linear programming and describe an implementation of this algorithm for solving the minimum cost network flow problem. In each iteration, the linear system that determines the search direction is computed inexactly, and the norm of the resulting residual vector is used in the stopping criteria of the iterative solver employed for the solution of the system. In the implementation, a preconditioned conjugate gradient method is used as the iterative solver. The details of the implementation are described and the code, pdnet, is tested on a large set of standard minimum cost network flow test problems. Computational results indicate that the implementation is competitive with state-of-the-art network flow codes.

Key Words. Interior point method, linear programming, network flows, primal-infeasible dual-feasible, truncated Newton method, conjugate gradient, maximum flow, experimental test...
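The inexact-solve idea in the abstract can be illustrated with a minimal Jacobi-preconditioned conjugate gradient solver that truncates once the residual norm meets a tolerance. This is a generic sketch, not pdnet's implementation: the 3x3 system, the Jacobi preconditioner, and the tolerance are illustrative assumptions.

```python
# Sketch of a truncated iterative solve: preconditioned conjugate gradient
# (Jacobi preconditioner) with a residual-norm stopping test, echoing the
# stopping criterion described in the abstract. The system is illustrative.

def matvec(A, v):
    return [sum(a * vi for a, vi in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(A, b, tol=1e-8, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = list(b)                                  # r = b - A*0
    m_inv = [1.0 / A[i][i] for i in range(n)]    # Jacobi preconditioner
    z = [mi * ri for mi, ri in zip(m_inv, r)]
    p = list(z)
    rz = dot(r, z)
    for _ in range(max_iter):
        if dot(r, r) ** 0.5 <= tol:              # truncation: residual test
            break
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        z = [mi * ri for mi, ri in zip(m_inv, r)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        p = [zi + beta * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]                            # symmetric positive definite
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
residual = [bi - axi for bi, axi in zip(b, matvec(A, x))]
print(dot(residual, residual) ** 0.5 < 1e-6)
```

In a truncated Newton setting the tolerance would typically be loosened in early outer iterations and tightened near the solution, so that no effort is wasted solving early systems to high accuracy.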
Programming Under Probabilistic Constraint with Discrete Random Variable
Trends in Mathematical Programming, L. Grandinetti et, 1998
Abstract

Cited by 26 (9 self)
A probabilistic constraint of the type P(Ax ≤ β) ≥ p is considered, and it is proved that under some conditions the constraining function is quasi-concave. The probabilistic constraint is embedded into a mathematical programming problem whose algorithmic solution is also discussed.
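When the random right-hand side β has a finite discrete distribution, the constraint P(Ax ≤ β) ≥ p can at least be checked at a given point by enumerating scenarios. The sketch below does only that feasibility check; the scenario data and function name are illustrative assumptions, and the paper's algorithmic treatment goes well beyond this.

```python
# Hedged sketch: evaluating P(Ax <= beta) >= p by scenario enumeration
# when beta has a finite discrete distribution. Data are illustrative.

def prob_feasible(A, x, scenarios, p):
    """scenarios: list of (probability, beta_vector) pairs summing to 1.
    Returns True iff P(Ax <= beta) >= p at the point x."""
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    prob = sum(w for w, beta in scenarios
               if all(axi <= bi for axi, bi in zip(Ax, beta)))
    return prob >= p

A = [[1.0, 1.0],
     [1.0, -1.0]]
scenarios = [(0.5, [3.0, 2.0]),   # beta = (3, 2) with probability 0.5
             (0.3, [2.0, 1.0]),
             (0.2, [1.0, 0.0])]
x = [1.0, 0.5]                    # Ax = (1.5, 0.5)
print(prob_feasible(A, x, scenarios, p=0.75))   # first two scenarios satisfy Ax <= beta
```

The quasi-concavity result in the abstract is what makes embedding such a constraint into an optimization problem tractable, since the feasible region {x : P(Ax ≤ β) ≥ p} is then convex under the stated conditions.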
The Many Facets of Linear Programming
2000
Abstract

Cited by 25 (1 self)
We examine the history of linear programming from computational, geometric, and complexity points of view, looking at simplex, ellipsoid, interior-point, and other methods.

Key words. linear programming, history, simplex method, ellipsoid method, interior-point methods

1. Introduction
At the last Mathematical Programming Symposium in Lausanne, we celebrated the 50th anniversary of the simplex method. Here, we are at or close to several other anniversaries relating to linear programming: the sixtieth of Kantorovich's 1939 paper on "Mathematical Methods in the Organization and Planning of Production" (and the fortieth of its appearance in the Western literature) [55]; the fiftieth of the historic 0th Mathematical Programming Symposium that took place in Chicago in 1949 on Activity Analysis of Production and Allocation [64]; the forty-fifth of Frisch's suggestion of the logarithmic barrier function for linear programming [37]; the twenty-fifth of the awarding of the 1975 Nobe...
On the local behavior of an interior point method for nonlinear programming
Numerical Analysis 1997, 1997
Abstract

Cited by 25 (4 self)
Jorge Nocedal
We study the local convergence of a primal-dual interior point method for nonlinear programming. A linearly convergent version of this algorithm has been shown in [2] to be capable of solving large and difficult nonconvex problems. But for the algorithm to reach its full potential, it must converge rapidly to the solution. In this paper we describe how to design the algorithm so that it converges superlinearly on regular problems.

Key words: constrained optimization, interior point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, successive quadratic programming.
Contributions to the theory of stochastic programming
Mathematical Programming, 1973
Abstract

Cited by 25 (10 self)
Two stochastic programming decision models are presented. In the first one, we use probabilistic constraints and constraints involving conditional expectations, and further incorporate penalties into the objective. The probabilistic constraint prescribes a lower bound for the probability of simultaneous occurrence of events, the number of which can be infinite, in which case stochastic processes are involved. The second one is a variant of the model of two-stage programming under uncertainty, where we require the solvability of the second stage problem only with a prescribed (high) probability. The theory presented in this paper is based to a large extent on recent results of the author concerning logarithmic concave measures.