Results 1–7 of 7
An Efficient Algorithm for Minimizing a Sum of p-Norms
SIAM Journal on Optimization, 1997
Abstract

Cited by 13 (2 self)
We study the problem of minimizing a sum of p-norms where p is a fixed real number in the interval [1, ∞). Several practical algorithms have been proposed to solve this problem. However, none of them has a known polynomial time complexity. In this paper, we transform the problem into standard conic form. Unlike those in most convex optimization problems, the cone for the p-norm problem is not self-dual unless p = 2. Nevertheless, we are able to construct two logarithmically homogeneous self-concordant barrier functions for this problem. The barrier parameter of the first barrier function does not depend on p. The barrier parameter of the second barrier function increases with p. Using both barrier functions, we present a primal-dual potential reduction algorithm to compute an ε-optimal solution in polynomial time that is independent of p. Computational experience with a Matlab implementation is also reported. Key words. Shortest network, Steiner minimum trees, facilities location, po...
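As a quick illustration of why a conic reformulation is available at all, the sum-of-p-norms objective is convex for any fixed p ≥ 1. The toy instance below is hypothetical (random data, not the paper's algorithm) and simply checks the midpoint convexity inequality numerically:

```python
import numpy as np

# Hypothetical small instance of f(x) = sum_i ||A_i x - b_i||_p.
# This only evaluates the objective and checks convexity at a midpoint;
# it is an illustration, not the primal-dual algorithm from the paper.
rng = np.random.default_rng(0)
p = 3.0
A = [rng.standard_normal((4, 2)) for _ in range(3)]
b = [rng.standard_normal(4) for _ in range(3)]

def sum_of_p_norms(x):
    """Objective f(x) = sum_i ||A_i x - b_i||_p for a fixed p >= 1."""
    return sum(np.linalg.norm(Ai @ x - bi, ord=p) for Ai, bi in zip(A, b))

x, y = rng.standard_normal(2), rng.standard_normal(2)
mid = 0.5 * (x + y)
# Convexity: f((x+y)/2) <= (f(x) + f(y)) / 2, which justifies a conic model.
assert sum_of_p_norms(mid) <= 0.5 * (sum_of_p_norms(x) + sum_of_p_norms(y)) + 1e-12
```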
A Lagrangian Dual Method with Self-Concordant Barriers for Multi-Stage Stochastic Convex Nonlinear Programming
, 1999
Abstract

Cited by 8 (4 self)
This paper presents an algorithm for solving multi-stage stochastic convex nonlinear programs. The algorithm is based on the Lagrangian dual method, which relaxes the nonanticipativity constraints, and the barrier function method, which enhances the smoothness of the dual objective function so that Newton search directions can be used. The algorithm is shown to be globally convergent and to have polynomial-time complexity. Keywords: Multi-stage stochastic nonlinear programming, Lagrangian dual, Self-concordant barrier, Interior point methods, Polynomial-time complexity. The research is partially supported by Grant RP972685 of the National University of Singapore. 1 Introduction In this paper we propose an algorithm for multi-stage stochastic convex nonlinear programming (MSSCNP). In contrast with two-stage SP, multi-stage SP not only inherently has more scenarios but also more complicated scenario-tree structures. Therefore, multi-stage SP is much more difficult to solve...
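The core idea of relaxing nonanticipativity with multipliers can be seen on a two-scenario toy problem (a hypothetical example, not taken from the paper): couple the scenario decisions by x1 = x2, move that constraint into the objective with a multiplier, and the inner minimization separates by scenario with a smooth dual:

```python
# Toy two-scenario sketch (assumed, not the paper's code): relax the
# nonanticipativity constraint x1 = x2 with a multiplier lam; the inner
# minimization is separable per scenario and solvable in closed form.
a1, a2 = 1.0, 3.0  # hypothetical scenario data

def dual(lam):
    x1, x2 = a1 - lam, a2 + lam  # per-scenario minimizers of the Lagrangian
    return 0.5*(x1-a1)**2 + 0.5*(x2-a2)**2 + lam*(x1 - x2)

lam_star = (a1 - a2) / 2.0           # dual optimum, found analytically
primal_x = (a1 + a2) / 2.0           # primal optimum with x1 = x2 enforced
primal_val = 0.5*(primal_x-a1)**2 + 0.5*(primal_x-a2)**2
assert abs(dual(lam_star) - primal_val) < 1e-12  # zero duality gap
```

Here the dual function is smooth and concave in the multiplier, which is what makes Newton search directions usable on the dual.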
Proving Strong Duality for Geometric Optimization Using a Conic Formulation
, 1999
Abstract

Cited by 5 (1 self)
Geometric optimization is an important class of problems with many applications, especially in engineering design. In this article, we provide new, simplified proofs for the well-known associated duality theory, using conic optimization. After introducing suitable convex cones and studying their properties, we model geometric optimization problems with a conic formulation, which allows us to apply the powerful duality theory of conic optimization and derive the duality results valid for geometric optimization.
Notes on Duality in Second Order and p-Order Cone Optimization
, 2000
Abstract

Cited by 3 (0 self)
Recently, the so-called second order cone optimization problem has received much attention, because the problem has many applications and can, at least in theory, be solved efficiently by interior-point methods. In this note we treat duality for second order cone optimization problems and in particular whether a nonzero duality gap can be introduced when casting a convex quadratically constrained optimization problem as a second order cone optimization problem. Furthermore, we also discuss the p-order cone optimization problem, which is a natural generalization of the second order case. Specifically, we suggest a new self-concordant barrier for the p-order cone optimization problem. 1 Introduction The second order cone optimization problem can be stated as (SOCP) minimize f^T x subject to ||A_i x - b_i|| <= c_{i:} x - d_i, i = 1, ..., k; Hx = h, where A_i is an (m_i - 1) x n matrix, H is an l x n matrix, and all the other quantities have conforming dimensions. c_{i:} denotes the ith row of ...
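The (SOCP) constraints above can be checked pointwise: a candidate x is feasible when every residual norm ||A_i x - b_i|| stays below the affine right-hand side c_{i:} x - d_i and the linear equations hold. A minimal feasibility check with hypothetical placeholder data:

```python
import numpy as np

# Check a candidate point against second-order cone constraints of the form
# ||A_i x - b_i|| <= c_i^T x - d_i, plus optional linear equations Hx = h.
def socp_feasible(x, cones, H=None, h=None, tol=1e-9):
    for A, b, c, d in cones:
        if np.linalg.norm(A @ x - b) > c @ x - d + tol:
            return False
    if H is not None and np.linalg.norm(H @ x - h) > tol:
        return False
    return True

# Hypothetical single-cone instance encoding ||x|| <= 2:
A = np.eye(2)
b = np.zeros(2)
c = np.zeros(2)
d = -2.0
assert socp_feasible(np.array([1.0, 1.0]), [(A, b, c, d)])
assert not socp_feasible(np.array([3.0, 0.0]), [(A, b, c, d)])
```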
Improving Complexity of Structured Convex Optimization Problems Using Self-Concordant Barriers
, 2001
Abstract

Cited by 2 (0 self)
The purpose of this paper is to provide improved complexity results for several classes of structured convex optimization problems using the theory of self-concordant functions developed in [11]. We describe the classical short-step interior-point method and optimize its parameters in order to provide the best possible iteration bound. We also discuss the necessity of introducing two parameters in the definition of self-concordancy and which one is best to fix. A lemma from [3] is improved, which allows us to review several classes of structured convex optimization problems and improve the corresponding complexity results.
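For context, the two parameters mentioned in the abstract are usually written as a pair (κ, ν). The following sketch of the standard definition follows the usual Nesterov–Nemirovski convention and is not quoted from the paper:

```latex
% A barrier F on an open convex domain is (kappa, nu)-self-concordant if,
% for all x in the domain and all directions h:
\left| \nabla^3 F(x)[h,h,h] \right|
  \le 2\kappa \left( \nabla^2 F(x)[h,h] \right)^{3/2},
\qquad
\left| \nabla F(x)[h] \right|
  \le \sqrt{\nu}\, \left( \nabla^2 F(x)[h,h] \right)^{1/2}.
% Rescaling F by kappa^2 recovers the standard case kappa = 1 with barrier
% parameter kappa^2 nu, so short-step methods need on the order of
% kappa * sqrt(nu) * log(1/epsilon) iterations.
```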
A Conic Formulation for L_p Norm Optimization
, 2000
Abstract

Cited by 1 (0 self)
In this paper, we formulate the l_p-norm optimization problem as a conic optimization problem, derive its standard duality properties and show it can be solved in polynomial time. We first define an ad hoc closed convex cone L_p, study its properties and derive its dual. This allows us to express the standard l_p-norm optimization primal problem as a conic problem involving L_p. Using convex conic duality and our knowledge about L_p, we proceed to derive the dual of this problem and prove the well-known regularity properties of this primal-dual pair, i.e. zero duality gap and primal attainment. Finally, we prove that the class of l_p-norm optimization problems can be solved up to a given accuracy in polynomial time, using the framework of interior-point algorithms and self-concordant barriers.
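The duality theory for p-norm cones rests on the conjugate exponent q with 1/p + 1/q = 1 and Hölder's inequality |x·y| ≤ ||x||_p ||y||_q, which pairs the primal cone with its dual. A quick numerical sanity check (illustrative only, not the paper's construction):

```python
import numpy as np

# Hoelder's inequality |x.y| <= ||x||_p * ||y||_q for conjugate exponents,
# checked on random vectors; this pairing underlies the dual of the L_p cone.
rng = np.random.default_rng(1)
p = 3.0
q = p / (p - 1.0)          # conjugate exponent: 1/p + 1/q = 1
for _ in range(100):
    x = rng.standard_normal(5)
    y = rng.standard_normal(5)
    assert abs(x @ y) <= np.linalg.norm(x, p) * np.linalg.norm(y, q) + 1e-12
```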
Self-Concordant Functions in Structured Convex Optimization
, 2000
Abstract
This paper provides a self-contained introduction to the theory of self-concordant functions [8] and applies it to several classes of structured convex optimization problems. We describe the classical short-step interior-point method and optimize its parameters to provide its best possible iteration bound. We also discuss the necessity of introducing two parameters in the definition of self-concordancy, how they react to addition and scaling and which one is best to fix. A lemma from [2] is improved and allows us to review several classes of structured convex optimization problems and evaluate their algorithmic complexity, using the self-concordancy of the associated logarithmic barriers.
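The logarithmic barriers mentioned here are the prototypical self-concordant functions: for a polytope {x : Ax ≤ b} with m inequalities, F(x) = -Σ log(b_i - a_i^T x) is self-concordant with barrier parameter ν = m. A minimal sketch (assumed, not the paper's code) showing the barrier blowing up toward the boundary:

```python
import numpy as np

# Logarithmic barrier for the interval -1 <= x <= 1, written as Ax <= b.
# Each inequality contributes one -log(slack) term; nu = m = 2 here.
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, 1.0])

def barrier(x):
    s = b - A @ x               # slacks must stay strictly positive
    return -np.sum(np.log(s))

assert barrier(np.array([0.0])) == 0.0                       # slacks are (1, 1)
assert barrier(np.array([0.9])) > barrier(np.array([0.0]))   # grows near boundary
```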