Results 1–10 of 10
A Primal-Dual Potential Reduction Method for Problems Involving Matrix Inequalities
, 1995
Abstract

Cited by 87 (21 self)
We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration.
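The truncated conjugate-gradient idea in the abstract can be illustrated with a generic sketch (ours, not the authors' implementation): a plain CG solve of a symmetric positive-definite least-squares system, where the `max_iter` cutoff plays the role of stopping CG before completion. All names and the example data are illustrative.

```python
import numpy as np

def cg(A, b, max_iter=None, tol=1e-10):
    """Conjugate gradient for A x = b with A symmetric positive definite.
    Stopping early (max_iter < n) gives a truncated solve; the iterate is
    still a useful approximate solution / search direction."""
    n = b.size
    if max_iter is None:
        max_iter = n
    x = np.zeros(n)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Normal equations of a small least-squares problem: min ||M y - d||
M = np.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])
d = np.array([1.0, 2.0, 3.0])
A, b = M.T @ M, M.T @ d     # A is SPD when M has full column rank
y = cg(A, b)
```

In exact arithmetic CG terminates in at most n steps; truncating it earlier trades accuracy per iteration against total work, which is the saving the abstract describes.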
Smoothed analysis of Renegar’s condition number for linear programming
, 2003
Abstract

Cited by 22 (6 self)
We perform a smoothed analysis of Renegar’s condition number for linear programming. In particular, we show that for every n-by-d matrix Ā, n-vector b̄ and d-vector c̄ satisfying ∥(Ā, b̄, c̄)∥_F ≤ 1 and every σ ≤ 1/√(dn), the expectation of the logarithm of C(A,b,c) is O(log(nd/σ)), where A, b and c are Gaussian perturbations of Ā, b̄ and c̄ of variance σ². From this bound, we obtain a smoothed analysis of Renegar’s interior point algorithm. By combining this with the smoothed analysis of finite termination of Spielman and Teng (Math. Prog. Ser. B, 2003), we show that the smoothed complexity of linear programming is O(n³ log(nd/σ)).
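As a rough illustration of what a smoothed analysis of a condition number measures, the following sketch Monte-Carlo estimates E[log κ(Ā + σG)] over Gaussian perturbations G. It uses the classical matrix condition number κ as a simple stand-in for Renegar's C(A,b,c), which is harder to compute; that substitution, and all names here, are ours, not the paper's.

```python
import numpy as np

def smoothed_log_cond(A_bar, sigma, trials=200, seed=0):
    """Monte-Carlo estimate of E[log kappa(A_bar + sigma * G)] over
    Gaussian G. kappa is the classical matrix condition number, used
    here as a simple stand-in for Renegar's condition number."""
    rng = np.random.default_rng(seed)
    logs = []
    for _ in range(trials):
        G = rng.standard_normal(A_bar.shape)
        logs.append(np.log(np.linalg.cond(A_bar + sigma * G)))
    return float(np.mean(logs))

# A rank-deficient base matrix has infinite condition number, yet the
# smoothed (perturbed) condition number is finite on average.
A_bar = np.zeros((4, 4))
est = smoothed_log_cond(A_bar, sigma=0.1)
```

The point of the experiment: even when the worst-case input is arbitrarily ill conditioned, the expected log-condition-number after a small random perturbation stays bounded, which is what the O(log(nd/σ)) bound formalizes.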
An algorithm to analyze stability of gene-expression patterns

, 2002
Abstract

Cited by 8 (2 self)
Many problems in the field of computational biology consist of the analysis of so-called gene-expression data. The successful application of approximation and optimization techniques, dynamical systems, algorithms and the utilization of the underlying combinatorial structures leads to a better understanding in that field. For the concrete example of gene-expression data we extend an algorithm which exploits discrete information. This information lies in extremal points of polyhedra, which grow step by step, up to a possible stopping. We study gene-expression data in time, mathematically model it by a time-continuous system, and time-discretize this system. By our algorithm we compute the regions of stability and instability. We give a motivating introduction from genetics, present biological and mathematical interpretations of (in)stability, point out structural frontiers and give an outlook to future research.
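The time-discretization step can be illustrated with a minimal sketch (our own, not the paper's algorithm): a linear system ẋ = Mx is discretized by forward Euler, and discrete-time stability is read off the spectral radius of the transition matrix.

```python
import numpy as np

def euler_discretize(M, h):
    """Forward-Euler time discretization of xdot = M x:
    x_{k+1} = (I + h M) x_k."""
    return np.eye(M.shape[0]) + h * M

def is_stable(Md):
    """A discrete linear system x_{k+1} = Md x_k is asymptotically stable
    iff the spectral radius of Md is strictly below 1."""
    return bool(np.max(np.abs(np.linalg.eigvals(Md))) < 1.0)

# A continuous system with eigenvalues -1 and -2 is stable; its Euler
# discretization stays stable only for small enough step size h.
M = np.array([[-1.0, 0.0], [0.0, -2.0]])
stable_small_h = is_stable(euler_discretize(M, h=0.1))
stable_big_h = is_stable(euler_discretize(M, h=1.5))  # 1 + 1.5*(-2) = -2
```

The example shows why discretization and stability interact: a stable continuous system can yield an unstable discrete one if the step size is too large.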
Line Search in Potential Reduction Algorithms for Linear Programming
, 1989
Abstract

Cited by 3 (0 self)
We describe several line search strategies in recent potential reduction algorithms for linear programming. We clarify some concerns about the step size of the original algorithm. In particular, we illustrate that the dual step of the algorithm can be sufficiently "long". We also discuss some other implementation issues for the algorithm. Keywords: Linear programming, line search, potential reduction algorithms. Abbreviated title: Line Search in Potential Reduction Algorithms. (Department of Management Sciences, The University of Iowa, Iowa City, Iowa 52242.) Since Karmarkar proposed his projective algorithm [5], various primal-dual potential reduction algorithms for linear programming have been developed by Anstreicher and Bosch [1], Freund [2], Gonzaga and Todd [4], Kojima, Mizuno and Yoshise [6], Liu and Goldfarb [7], McShane, Monma and Shanno [8], and Ye [10][11], among others. All of these algorithms are based on reducing a primal-dual potential function that first appeared in T...
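A generic potential-function line search can be sketched as follows (an illustration of the idea, not this paper's method): given search directions, scan step sizes that keep the iterates strictly positive and pick the one minimizing a Tanabe–Todd–Ye-style potential ρ log(xᵀs) − Σ log(x_i s_i). The grid search and toy directions are our simplifications.

```python
import numpy as np

def potential(x, s, rho):
    """Primal-dual potential phi(x, s) = rho * log(x's) - sum log(x_i s_i),
    with rho > n; reducing phi drives the duality gap x's to zero."""
    return rho * np.log(x @ s) - np.sum(np.log(x * s))

def line_search(x, s, dx, ds, rho, grid=100):
    """Crude line search: scan step sizes on a grid, keep (x, s)
    strictly positive, return the step minimizing the potential."""
    # largest step keeping x + a*dx > 0 and s + a*ds > 0
    d = np.concatenate([dx, ds])
    cur = np.concatenate([x, s])
    a_max = 1.0
    mask = d < 0
    if mask.any():
        a_max = min(1.0, 0.99 * np.min(-cur[mask] / d[mask]))
    alphas = np.linspace(0.0, a_max, grid)
    vals = [potential(x + a * dx, s + a * ds, rho) for a in alphas]
    return alphas[int(np.argmin(vals))]

x = np.array([1.0, 2.0]); s = np.array([2.0, 1.0])
dx = -0.5 * x; ds = -0.5 * s          # toy directions shrinking the gap
rho = len(x) + np.sqrt(len(x))
a = line_search(x, s, dx, ds, rho)
```

Because these toy directions scale the gap down uniformly, the potential decreases monotonically in the step size and the search picks the longest feasible step, the kind of "long" step the abstract refers to.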
On the worst case complexity of potential reduction algorithms for linear programming, Working Paper 3558-93
 Sloan School of Management, MIT
, 1993
Abstract

Cited by 1 (0 self)
There are several classes of interior point algorithms that solve linear programming problems in O(√n L) iterations, but it is not known whether this bound is tight for any interior point algorithm. Among interior point algorithms, several potential reduction algorithms combine both theoretical (O(√n L) iterations) and practical efficiency, as they allow the flexibility of line searches in the potential function and thus can lead to practical implementations. It is a significant open question whether interior point algorithms can lead to better complexity bounds. In the present paper we give some negative answers to this question for the class of potential reduction algorithms. We show that, without line searches in the potential function, the bound O(√n L) is tight for several potential reduction algorithms, i.e., there is a class of examples in which the algorithms need at least Ω(√n L) iterations to find an optimal solution. In addition, we show that for a class of potential functions, even if we allow line searches in the potential function, the bounds are still tight. We note that this is the first time that tight bounds have been obtained for any interior point algorithm.
Anticipated Behavior of Long-Step Algorithms for Linear Programming
Abstract
We provide a probabilistic analysis of the second order term that arises in path-following algorithms for linear programming. We use this result to show that two such methods, algorithms generating a sequence of points in a neighborhood of the central path and in its relaxation, require a worst-case number of iterations that is O(nL) and an anticipated number of iterations that is O(log(n)L). The second neighborhood spreads almost all over the feasible region, so that the generated points are close to the boundary rather than the central path. We also propose a potential reduction algorithm which requires the same order of number of iterations as the path-following algorithms. Key words: Linear Programming, interior point algorithms, path-foll...
(Research was supported in part by Grant-in-Aid 63490010 for General Scientific Research of the Ministry of Education, Science and Culture, Japan, and in part by NSF grant DMS-8904406 and ONR contract N00014-87-K-0212.)
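The two neighborhoods of the central path mentioned above can be illustrated with a small sketch (notation and thresholds are ours, not the paper's): the narrow 2-norm neighborhood ‖XSe − μe‖₂ ≤ θμ and a wide one-sided relaxation x_i s_i ≥ γμ, which admits points near the boundary.

```python
import numpy as np

def mu(x, s):
    """Average complementarity mu = x's / n, the centering parameter."""
    return (x @ s) / x.size

def in_N2(x, s, theta=0.5):
    """Narrow neighborhood of the central path:
    ||XSe - mu e||_2 <= theta * mu."""
    m = mu(x, s)
    return bool(np.linalg.norm(x * s - m) <= theta * m)

def in_Ninf(x, s, gamma=1e-3):
    """Wide (one-sided) neighborhood: every product x_i s_i >= gamma * mu.
    Points here may sit close to the boundary of the feasible region."""
    return bool(np.min(x * s) >= gamma * mu(x, s))

x = np.array([1.0, 1.0, 1.0]); s = np.array([1.0, 1.0, 1.0])
centered = in_N2(x, s)                  # exactly on the central path
x2 = np.array([1.0, 1.0, 0.01])         # one product nearly zero
wide_only = (not in_N2(x2, s)) and in_Ninf(x2, s)
```

The second point passes the wide test but fails the narrow one, matching the abstract's remark that the relaxed neighborhood contains points close to the boundary rather than the central path.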
SEMIDEFINITE PROGRAMMING*
Abstract
Abstract. In semidefinite programming, one minimizes a linear function subject to the constraint that an affine combination of symmetric matrices is positive semidefinite. Such a constraint is nonlinear and nonsmooth, but convex, so semidefinite programs are convex optimization problems. Semidefinite programming unifies several standard problems (e.g., linear and quadratic programming) and finds many applications in engineering and combinatorial optimization. Although semidefinite programs are much more general than linear programs, they are not much harder to solve. Most interior-point methods for linear programming have been generalized to semidefinite programs. As in linear programming, these methods have polynomial worst-case complexity and perform very well in practice. This paper gives a survey of the theory and applications of semidefinite programs and an introduction to primal-dual interior-point methods for their solution. Key words: semidefinite programming, convex optimization, interior-point methods, eigenvalue optimization, combinatorial optimization, system and control theory. AMS subject classifications: 65K05, 49M45, 93B51, 90C25, 90C27, 90C90, 15A18. 1. Introduction. 1.1. Semidefinite programming. We consider the problem of minimizing a linear function ...
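The constraint form described here, an affine combination F(x) = F₀ + Σ x_i F_i required to be positive semidefinite, can be checked numerically via eigenvalues. The following minimal sketch (our example, not from the paper) tests feasibility of a 2-by-2 linear matrix inequality.

```python
import numpy as np

def lmi_value(x, F0, Fs):
    """Affine matrix function F(x) = F0 + sum_i x_i * F_i of an LMI."""
    F = F0.copy()
    for xi, Fi in zip(x, Fs):
        F = F + xi * Fi
    return F

def is_psd(F, tol=1e-9):
    """F >= 0 (positive semidefinite) iff its smallest eigenvalue is >= 0;
    eigvalsh applies because F is symmetric."""
    return bool(np.min(np.linalg.eigvalsh(F)) >= -tol)

# Feasibility of a 2x2 LMI: F(x) = diag(x, 1 - x) >= 0 iff 0 <= x <= 1.
F0 = np.array([[0.0, 0.0], [0.0, 1.0]])
Fs = [np.array([[1.0, 0.0], [0.0, -1.0]])]
feas = is_psd(lmi_value([0.5], F0, Fs))        # x = 0.5 is feasible
infeas = not is_psd(lmi_value([2.0], F0, Fs))  # x = 2 violates 1 - x >= 0
```

The feasible set here is the interval [0, 1], a convex set, reflecting the abstract's point that the PSD constraint is nonlinear and nonsmooth yet convex.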
Smoothed Analysis of Interior-Point Algorithms: Condition Number
, 2003
Abstract
A linear program is typically specified by a matrix A together with two vectors b and c, where A is an n-by-d matrix, b is an n-vector and c is a d-vector. There are several canonical forms for defining a linear program using (A,b,c). One commonly used canonical form is: max cᵀx s.t. Ax ≤ b, and its dual min bᵀy s.t. Aᵀy = c, y ≥ 0. In [Ren95b, Ren95a, Ren94], Renegar defined the condition number C(A,b,c) of a linear program and proved that an interior point algorithm whose complexity was O(n³ log(C(A,b,c)/ǫ)) could solve a linear program in this canonical form to relative accuracy ǫ, or determine that the program was infeasible or unbounded. In this paper, we prove that for any (Ā, b̄, c̄) such that ∥(Ā, b̄, c̄)∥_F ≤ 1, where ∥(Ā, b̄, c̄)∥_F ...
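The primal-dual pair in this canonical form can be sanity-checked with a tiny example (data chosen by us): weak duality says any primal-feasible x and dual-feasible y satisfy cᵀx ≤ bᵀy.

```python
import numpy as np

# Primal: max c'x s.t. Ax <= b.  Dual: min b'y s.t. A'y = c, y >= 0.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.5])
c = np.array([1.0, 1.0])

x = np.array([0.5, 0.5])        # primal feasible: Ax = (0.5, 0.5, 1.0) <= b
y = np.array([0.0, 0.0, 1.0])   # dual feasible: A'y = (1, 1) = c, y >= 0

primal_feasible = bool(np.all(A @ x <= b + 1e-12))
dual_feasible = bool(np.allclose(A.T @ y, c)) and bool(np.all(y >= 0))
gap = b @ y - c @ x             # weak duality: gap >= 0
```

Interior point methods of the kind analyzed here drive exactly this duality gap toward zero, and Renegar's condition number governs how accurately the data must be handled along the way.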
A primal-dual potential reduction method for problems involving matrix inequalities
, 1994