Results 1–10 of 12
Polynomiality of Primal-Dual Affine Scaling Algorithms for Nonlinear Complementarity Problems
, 1995
"... This paper provides an analysis of the polynomiality of primaldual interior point algorithms for nonlinear complementarity problems using a wide neighborhood. A condition for the smoothness of the mapping is used, which is related to Zhu's scaled Lipschitz condition, but is also applicable to ..."
Abstract

Cited by 11 (4 self)
 Add to MetaCart
This paper provides an analysis of the polynomiality of primal-dual interior point algorithms for nonlinear complementarity problems using a wide neighborhood. A condition for the smoothness of the mapping is used, which is related to Zhu's scaled Lipschitz condition, but is also applicable to mappings that are not monotone. We show that a family of primal-dual affine scaling algorithms generates an approximate solution (given a precision ε) of the nonlinear complementarity problem in a finite number of iterations whose order is a polynomial of n, ln(1/ε) and a condition number. If the mapping is linear then the results in this paper coincide with the ones in [13].
Logarithmic Barrier Decomposition Methods for Semi-Infinite Programming
, 1996
"... A computational study of some logarithmic barrier decomposition algorithms for semiinfinite programming is presented in this paper. The conceptual algorithm is a straightforward adaptation of the logarithmic barrier cutting plane algorithm which was presented recently by den Hertog et al., to solv ..."
Abstract

Cited by 11 (1 self)
 Add to MetaCart
A computational study of some logarithmic barrier decomposition algorithms for semi-infinite programming is presented in this paper. The conceptual algorithm is a straightforward adaptation of the logarithmic barrier cutting plane algorithm, presented recently by den Hertog et al., to solve semi-infinite programming problems. Decomposition (cutting plane) methods usually use cutting planes to improve the localization of the given problem. In this paper we propose an extension which uses linear cuts to solve large scale, difficult real world problems. This algorithm uses both static and (doubly) dynamic enumeration of the parameter space and allows multiple cuts to be added simultaneously for larger/difficult problems. The algorithm is implemented on both sequential and parallel computers. Implementation issues and parallelization strategies are discussed and encouraging computational results are presented. Keywords: column generation, convex programming, cutting plane met...
Ambiguous risk measures and optimal robust portfolios
 SIAM Journal on Optimization
"... Abstract. This paper deals with a problem of guaranteed (robust) financial decisionmaking under model uncertainty. An efficient method is proposed for determining optimal robust portfolios of risky financial instruments in the presence of ambiguity (uncertainty) on the probabilistic model of the re ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
(Show Context)
Abstract. This paper deals with a problem of guaranteed (robust) financial decision-making under model uncertainty. An efficient method is proposed for determining optimal robust portfolios of risky financial instruments in the presence of ambiguity (uncertainty) on the probabilistic model of the returns. Specifically, it is assumed that a nominal discrete return distribution is given, while the true distribution is only known to lie within a distance d from the nominal one, where the distance is measured according to the Kullback–Leibler divergence. The goal in this setting is to compute portfolios that are worst-case optimal in the mean-risk sense, that is, to determine portfolios that minimize the maximum, with respect to all the allowable distributions, of a weighted risk-mean objective. The analysis in the paper considers both the standard variance measure of risk and the absolute deviation measure.
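The ambiguity set described in this abstract is built from the Kullback–Leibler divergence between discrete distributions. As a minimal sketch (not the paper's method, and assuming the divergence is taken from the candidate distribution to the nominal one), a membership test for the set might look like:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(q || p) between discrete distributions.

    Here p is the nominal return distribution and q a candidate true
    distribution; the ambiguity set contains every q with D(q || p) <= d.
    (The direction of the divergence is an assumption for this sketch.)
    """
    return sum(qi * math.log(qi / pi) for pi, qi in zip(p, q) if qi > 0)

def in_ambiguity_set(p, q, d):
    # Candidate q is admissible if its divergence from nominal p is at most d.
    return kl_divergence(p, q) <= d

# Nominal distribution over 4 return scenarios and a slightly perturbed candidate.
p = [0.25, 0.25, 0.25, 0.25]
q = [0.30, 0.25, 0.25, 0.20]
print(in_ambiguity_set(p, q, d=0.05))  # prints True: small perturbation stays inside
```

The worst-case portfolio problem then optimizes against the most adverse distribution in this set, which is what makes the resulting portfolios robust.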
On The Complexity Of A Practical Interior-Point Method
"... The theory of selfconcordance has been used to analyze the complexity of interiorpoint methods based on Newton's method. For large problems, it may be impractical to use Newton's method; here we analyze a truncatedNewton method, in which an approximation to the Newton search direction is ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
The theory of self-concordance has been used to analyze the complexity of interior-point methods based on Newton's method. For large problems, it may be impractical to use Newton's method; here we analyze a truncated-Newton method, in which an approximation to the Newton search direction is used. In addition, practical interior-point methods often include enhancements such as extrapolation that are absent from the theoretical algorithms analyzed previously. We derive theoretical results that apply to such an algorithm, an algorithm similar to a sophisticated computer implementation of a barrier method. The results for a single barrier subproblem are a satisfying extension of the results for Newton's method. When extrapolation is used in the overall barrier method, however, our results are more limited. We indicate (by both theoretical arguments and examples) why more elaborate results may be difficult to obtain.
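A truncated-Newton method of the kind analyzed here replaces the exact solve of the Newton system H d = -g with a few iterations of an inner solver, commonly conjugate gradients. The following is a minimal pure-Python sketch of that idea (the function names and the fixed iteration cap are illustrative, not from the paper):

```python
def cg_truncated(matvec, b, max_iter=5, tol=1e-10):
    """Approximate the Newton direction d solving H d = -g by running a
    small, capped number of conjugate gradient iterations (the "truncation").
    matvec(v) returns H @ v; b plays the role of -g.
    """
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual b - H x, with x = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        if rs < tol:
            break
        hp = matvec(p)
        alpha = rs / sum(pi * hpi for pi, hpi in zip(p, hp))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * hpi for ri, hpi in zip(r, hp)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# 2x2 SPD Hessian H = [[4, 1], [1, 3]] and gradient g = (-1, -2), so b = -g.
H = [[4.0, 1.0], [1.0, 3.0]]
matvec = lambda v: [sum(H[i][j] * v[j] for j in range(2)) for i in range(2)]
d = cg_truncated(matvec, [1.0, 2.0])
print(d)  # close to the exact Newton step H^{-1} b = (1/11, 7/11)
```

On this tiny system CG converges exactly within the cap; on large problems the cap is what makes the method "truncated", trading direction accuracy for cost per iteration.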
Global Linear And Local Quadratic Convergence Of A Long-Step Adaptive-Mode Interior Point Method For Some Monotone Variational Inequality Problems
, 1996
"... . An interior point method is proposed to solve variational inequality problems for monotone functions and polyhedral sets. The method has the following advantages. 1. Given an initial interior feasible solution with duality gap ¯ 0 , the algorithm requires at most O[n log(¯ 0 =ffl)] iterations to ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
(Show Context)
An interior point method is proposed to solve variational inequality problems for monotone functions and polyhedral sets. The method has the following advantages.
1. Given an initial interior feasible solution with duality gap μ_0, the algorithm requires at most O[n log(μ_0/ε)] iterations to obtain an ε-optimal solution.
2. The rate of convergence of the duality gap is Q-quadratic.
3. At each iteration, a long-step improvement based on a line search is allowed.
4. The algorithm can automatically transfer from a linear mode to a quadratic mode to accelerate the local convergence.

Keywords: Polynomial Complexity of Algorithms, Interior Point Methods, Monotone Variational Inequality Problems, Rate of Convergence.

Footnotes: 1. The research is partially supported by Grant RP930033 of the National University of Singapore. 2. Department of Decision Sciences. Email: fbasunj@nus.sg. 3. Department of Mathematics. Email: matzgy@nus.sg.

1 Introduction
Given a function F : R^n → R^n and a nonem...
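The iteration bound quoted in this abstract is polynomial in n and in the logarithm of the required gap reduction. A toy evaluation of a bound of that shape (the constant factor c is a placeholder, since the abstract only gives the order):

```python
import math

def iteration_bound(n, mu0, eps, c=1.0):
    """Evaluate the bound c * n * log(mu0 / eps) on the number of interior
    point iterations needed to drive the duality gap from mu0 down to eps.
    The constant c is unspecified in the abstract; c = 1 is a placeholder.
    """
    return math.ceil(c * n * math.log(mu0 / eps))

# Reducing the gap from 1 to 1e-8 on a problem with n = 100 variables.
print(iteration_bound(n=100, mu0=1.0, eps=1e-8))  # prints 1843
```

The key point is the logarithmic dependence on the target precision: tightening ε by orders of magnitude only adds iterations linearly.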
A SUPERLINEARLY CONVERGENT ALGORITHM FOR LARGE SCALE MULTISTAGE STOCHASTIC NONLINEAR PROGRAMMING
, 2003
"... This paper presents an algorithm for solving a class of large scale nonlinear programming problem which is originally derived from the multistage stochastic convex nonlinear programming. Using the Lagrangiandual method and the MoreauYosida regularization, the primal problem is neatly transformed ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
This paper presents an algorithm for solving a class of large scale nonlinear programming problems originally derived from multistage stochastic convex nonlinear programming. Using the Lagrangian dual method and the Moreau–Yosida regularization, the primal problem is neatly transformed into a smooth convex problem. By introducing a self-concordant barrier function, an approximate generalized Newton method is then designed to solve the problem. The algorithm is shown to be superlinearly convergent. Some numerical results are presented to demonstrate the viability of the proposed method.
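The Moreau–Yosida regularization mentioned here smooths a nonsmooth convex function by infimal convolution with a quadratic. A classic one-dimensional illustration (not the paper's multistage setting) is the envelope of f(t) = |t|, which evaluates in closed form to the Huber function:

```python
def moreau_envelope_abs(x, lam):
    """Moreau-Yosida regularization of f(t) = |t| with parameter lam:
    min over y of |y| + (1 / (2 * lam)) * (y - x)**2.
    The minimizer is the soft-thresholding of x, and the optimal value is
    the Huber function: quadratic near 0, linear with slope 1 far from 0.
    """
    if abs(x) <= lam:
        return x * x / (2 * lam)
    return abs(x) - lam / 2

# Smooth near the kink of |x|, agrees with |x| - lam/2 in the tails.
print(moreau_envelope_abs(0.5, 1.0))  # prints 0.125
print(moreau_envelope_abs(2.0, 1.0))  # prints 1.5
```

The envelope is continuously differentiable even though |t| is not, which is exactly the property that lets a (generalized) Newton method be applied to the regularized problem.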
SOLVING THE DISCRETE lp-APPROXIMATION PROBLEM BY A METHOD OF CENTERS
"... The discrete lpapproximation problem is a basic problem in approximation theory and optimization. This problem is normally solved by Newtontype methods which are complicated by the nondifferentiability of the gradient function for p ∈ [1, 2). This paper discusses a scheme and its implementation fo ..."
Abstract
 Add to MetaCart
(Show Context)
The discrete lp-approximation problem is a basic problem in approximation theory and optimization. This problem is normally solved by Newton-type methods, which are complicated by the non-differentiability of the gradient function for p ∈ [1, 2). This paper discusses a scheme and its implementation for solving this problem by a method of analytic centers, which provides a unified treatment for all p ≥ 1 and has a polynomial bound of complexity. The original problem is reformulated as a constrained convex programming problem which admits a self-concordant logarithmic barrier. A special structure of the Newton system derived from minimizing the potential function is utilized to reduce the amount of computation. Some other implementation techniques are also presented. Computational results show that the method of centers is robust for all p.
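The objective in the discrete lp-approximation problem is the lp-norm of the residual Ax − b; for p ∈ [1, 2) its gradient fails to be differentiable wherever a residual component vanishes, which is what complicates Newton-type methods. A small sketch of the objective itself (illustrative only, plain lists standing in for matrices):

```python
def lp_residual_norm(A, x, b, p):
    """Objective of the discrete lp-approximation problem: ||A x - b||_p.

    A is a list of rows, x a coefficient vector, b the data vector.
    For p in [1, 2) the term |r_i|**p has a non-differentiable (p = 1)
    or unbounded (1 < p < 2) derivative of the gradient at r_i = 0.
    """
    r = [sum(aij * xj for aij, xj in zip(row, x)) - bi
         for row, bi in zip(A, b)]
    return sum(abs(ri) ** p for ri in r) ** (1.0 / p)

# Fit the constant model x to data b = [0, 0]: residuals are (x, x).
A = [[1.0], [1.0]]
print(lp_residual_norm(A, [1.0], [0.0, 0.0], 1))  # prints 2.0
print(lp_residual_norm(A, [1.0], [0.0, 0.0], 2))  # prints sqrt(2)
```

The method of centers described in the abstract sidesteps the troublesome gradient by working with a barrier for a constrained reformulation instead of differentiating this objective directly.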
Interior Point Methods for Cone-linear Optimization: Solvability, Modeling and Engineering Applications
"... Cone linear optimization (CLO) problems play a crucial role in the theory, algorithms and applications of modern optimization. Large classes of CLO problems are solvable efficiently by using modern interior point methods based software. Moreover, CLO allows to model many engineering optimization pro ..."
Abstract
 Add to MetaCart
Cone-linear optimization (CLO) problems play a crucial role in the theory, algorithms and applications of modern optimization. Large classes of CLO problems are solvable efficiently using software based on modern interior point methods. Moreover, CLO allows one to model many engineering optimization problems in a novel way. This paper provides a brief survey of the most important classes of CLO problems and provides useful information about their solvability and available software. Finally, to illustrate the applicability of CLO, a novel approach to robust optimization is discussed.

2. Keywords
Cone-linear optimization, semidefinite optimization, robust optimization, engineering design.

3. Cone-Linear Optimization
Cone-linear optimization (CLO) problems play a crucial role in the theory, algorithms and applications of modern optimization. A primal-dual pair of CLO problems can be given as

(P) min c^T x  s.t.  Ax − b ∈ C_1,  x ∈ C_2
(D) max b^T y  s.t.  c − A^T y ∈ C_2^*,  y ...