Results 1–10 of 108
How bad is selfish routing?
 JOURNAL OF THE ACM
, 2002
Abstract

Cited by 504 (27 self)
We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route traffic such that the sum of all travel times (the total latency) is minimized. In many settings, it may be expensive or impossible to regulate network traffic so as to implement an optimal assignment of routes. In the absence of regulation by some central authority, we assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general such a "selfishly motivated" assignment of traffic to paths will not minimize the total latency; hence, this lack of regulation carries the cost of decreased network performance. In this article, we quantify the degradation in network performance due to unregulated traffic. We prove that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4/3 times the minimum possible total latency (subject to the condition that all traffic must be routed). We also consider the more general setting in which edge latency functions are assumed only to be continuous and nondecreasing in the edge congestion. Here, the total latency of the routes chosen by selfish network users may be arbitrarily larger than the minimum possible total latency; however, we prove that it is no more than the total latency incurred by optimally routing twice as much traffic.
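The 4/3 bound for linear latencies can be seen on Pigou's two-link network, the canonical worst-case example in the selfish-routing literature; a minimal pure-Python sketch (illustrative, not code from the article):

```python
# Pigou's two-link example: one unit of traffic from s to t over two
# parallel edges. Edge 1 has latency l1(x) = x (congestion-sensitive);
# edge 2 has constant latency l2(x) = 1. Selfish users all take edge 1,
# since its latency never exceeds 1, so the Nash total latency is 1.
# The social optimum minimizes C(x) = x*l1(x) + (1 - x)*l2(x).

def total_latency(x):
    """Total travel time when a fraction x uses edge 1 and 1 - x uses edge 2."""
    return x * x + (1.0 - x)

nash = total_latency(1.0)  # everyone on edge 1
opt = min(total_latency(i / 10000.0) for i in range(10001))

print(nash / opt)  # about 1.3333, i.e. the 4/3 bound for linear latencies
```

The optimum splits the flow evenly (total latency 3/4), while the selfish outcome has total latency 1, giving the ratio 4/3.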
Interior-Point Methods
, 2000
Abstract

Cited by 463 (16 self)
The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semidefinite programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions.
The mathematics of eigenvalue optimization
 MATHEMATICAL PROGRAMMING
Abstract

Cited by 88 (11 self)
Optimization problems involving the eigenvalues of symmetric and nonsymmetric matrices present a fascinating mathematical challenge. Such problems arise often in theory and practice, particularly in engineering design, and are amenable to a rich blend of classical mathematical techniques and contemporary optimization theory. This essay presents a personal choice of some central mathematical ideas, outlined for the broad optimization community. I discuss the convex analysis of spectral functions and invariant matrix norms, touching briefly on semidefinite representability, and then outlining two broader algebraic viewpoints based on hyperbolic polynomials and Lie algebra. Analogous nonconvex notions lead into eigenvalue perturbation theory. The last third of the article concerns stability, for polynomials, matrices, and associated dynamical systems, ending with a section on robustness. The powerful and elegant language of nonsmooth analysis appears throughout, as a unifying narrative thread.
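The convexity of the largest-eigenvalue function, one of the spectral functions central to this area, can be spot-checked numerically; this sketch uses the closed-form eigenvalues of 2x2 symmetric matrices (an illustration of the property, not the essay's own material):

```python
import math
import random

# lambda_max of the 2x2 symmetric matrix [[a, b], [b, c]] in closed form:
# eigenvalues are (a + c)/2 +- sqrt(((a - c)/2)^2 + b^2).
def lam_max(a, b, c):
    return 0.5 * (a + c) + math.sqrt((0.5 * (a - c)) ** 2 + b * b)

# Spot-check convexity:
#   lam_max(t*A + (1-t)*B) <= t*lam_max(A) + (1-t)*lam_max(B)
random.seed(0)
for _ in range(1000):
    A = [random.uniform(-5, 5) for _ in range(3)]
    B = [random.uniform(-5, 5) for _ in range(3)]
    t = random.random()
    M = [t * x + (1 - t) * y for x, y in zip(A, B)]
    assert lam_max(*M) <= t * lam_max(*A) + (1 - t) * lam_max(*B) + 1e-9
print("convexity inequality held on 1000 random pairs")
```

Convexity follows because lambda_max is the pointwise maximum of the linear functions A -> v^T A v over unit vectors v, which is the starting point for the convex analysis of spectral functions the essay surveys.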
Approximate Minimum Enclosing Balls in High Dimensions Using Core-Sets
, 2003
Abstract

Cited by 34 (8 self)
This paper can be downloaded from http://www.compgeom.com/meb/. P. Kumar and J. Mitchell are partially supported by a grant from the National Science Foundation (CCR-0098172). J. Mitchell is also partially supported by grants from the Honda Fundamental Research Labs, Metron Aviation, NASA-Ames Research (NAG2-1325), and the US-Israel Binational Science Foundation. E. A. Yıldırım is partially supported by an NSF CAREER award (DMI-0237415).
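The indexed text above is the paper's acknowledgments rather than a technical summary. For context, here is a sketch of the well-known Bădoiu–Clarkson style core-set iteration for approximate minimum enclosing balls; this is an assumption about the general technique in this line of work, not necessarily the paper's exact algorithm:

```python
import math

# Core-set style iteration for an approximate minimum enclosing ball:
# repeatedly pull the center toward the current farthest point with a
# shrinking step 1/(k + 1). The farthest points visited form a small
# core-set, and O(1/eps^2) rounds give a (1 + eps)-approximation.

def approx_meb(points, rounds=2000):
    cx, cy = points[0]
    for k in range(1, rounds + 1):
        # farthest input point from the current center
        fx, fy = max(points, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
        step = 1.0 / (k + 1)
        cx += step * (fx - cx)
        cy += step * (fy - cy)
    radius = max(math.hypot(px - cx, py - cy) for px, py in points)
    return (cx, cy), radius

# Corners of the unit square: the exact ball is centered at (0.5, 0.5)
# with radius sqrt(2)/2 ~ 0.7071.
center, r = approx_meb([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)])
print(center, r)
```

The appeal of the core-set view is that the number of rounds (and hence core-set size) depends on the approximation quality eps, not on the dimension, which is what makes it attractive in high dimensions.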
Solving Standard Quadratic Optimization Problems Via Linear, Semidefinite and Copositive Programming
 J. Global Optim
, 2001
Abstract

Cited by 34 (5 self)
The problem of minimizing a (nonconvex) quadratic function over the simplex (the standard quadratic optimization problem) has an exact convex reformulation as a copositive programming problem. In this paper we show how to approximate the optimal solution by approximating the cone of copositive matrices via systems of linear inequalities, and, more refined, linear matrix inequalities (LMIs). In particular, we show that our approach leads to a polynomial-time approximation scheme for the standard quadratic optimization problem. This is an improvement on the previous complexity result by Nesterov [10] (that a 2/3-approximation is always possible). Numerical examples from various applications are provided to illustrate our approach, which extends ideas of De Klerk and Pasechnik [5] for the maximal stable set problem in a graph. Keywords: approximation algorithms, stability number, semidefinite programming, copositive cone, standard quadratic optimization
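To make the standard quadratic optimization problem concrete, a brute-force grid sketch over the 2-simplex (illustration only; it does not implement the paper's copositive relaxations):

```python
# Standard quadratic optimization problem: minimize x^T Q x over the
# simplex {x >= 0, sum(x) = 1}. Grid search over the 3-point simplex,
# purely to make the problem concrete.

def stdqp_value(Q, n_grid=200):
    """Approximate min of x^T Q x over the 3-point simplex by grid search."""
    best = float("inf")
    for i in range(n_grid + 1):
        for j in range(n_grid + 1 - i):
            x = (i / n_grid, j / n_grid, (n_grid - i - j) / n_grid)
            v = sum(Q[a][b] * x[a] * x[b] for a in range(3) for b in range(3))
            best = min(best, v)
    return best

# For Q = I the minimum of sum(x_i^2) over the simplex is 1/n = 1/3,
# attained at the barycenter (1/3, 1/3, 1/3).
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(stdqp_value(I3))
```

Grid search is exponential in the dimension, which is exactly why the paper's polynomial-time approximation scheme via copositive matrix approximations matters.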
Tractable approximations of robust conic optimization problems
, 2006
Abstract

Cited by 33 (11 self)
In earlier proposals, the robust counterpart of conic optimization problems exhibits a lateral increase in complexity, i.e., robust linear programming problems (LPs) become second-order cone problems (SOCPs), robust SOCPs become semidefinite programming problems (SDPs), and robust SDPs become NP-hard. We propose a relaxed robust counterpart for general conic optimization problems that (a) preserves the computational tractability of the nominal problem; specifically the robust conic optimization problem retains its original structure, i.e., robust LPs remain LPs, robust SOCPs remain SOCPs and robust SDPs remain SDPs, and (b) allows us to provide a guarantee on the probability that the robust solution is feasible when the uncertain coefficients obey independent and identically distributed normal distributions.
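The flavor of such tractability claims can be illustrated with the classical box-uncertainty robust counterpart of a single linear constraint (a standard textbook construction, not necessarily the paper's specific relaxation; all data below is made up):

```python
from itertools import product

# Robust linear constraint a^T x <= b where each coefficient a_i may
# deviate from a nominal a0_i by at most rho (box uncertainty). The worst
# case is attained coordinatewise, so the robust constraint is equivalent to
#     a0^T x + rho * ||x||_1 <= b,
# which is linear-representable: the robust counterpart of an LP is again an LP.

a0 = [1.0, -2.0, 0.5]   # nominal coefficients (illustrative data)
rho = 0.3               # per-coefficient uncertainty radius
x = [0.7, 1.1, -0.4]    # a candidate solution

closed_form = sum(a * v for a, v in zip(a0, x)) + rho * sum(abs(v) for v in x)

# the worst case over the box is attained at one of its 2^3 corners
corner_max = max(
    sum((a + s * rho) * v for a, v, s in zip(a0, x, signs))
    for signs in product((-1.0, 1.0), repeat=3)
)
print(closed_form, corner_max)  # identical values
```

For ellipsoidal rather than box uncertainty the same worst-case maximization yields a second-order cone term, which is the "lateral increase in complexity" the abstract refers to.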
On the closedness of the linear image of a closed convex cone
, 1992
DOI: 10.1287/moor.1060.0242
A New Condition Measure, Pre-Conditioners, and Relations between Different Measures of Conditioning for Conic Linear Systems
, 2001
Abstract

Cited by 19 (6 self)
In recent years, a body of research into "condition numbers" for convex optimization has been developed, aimed at capturing the intuitive notion of problem behavior. This research has been shown to be relevant in studying the efficiency of algorithms (including interior-point algorithms) for convex optimization, as well as other behavioral characteristics of these problems such as problem geometry, deformation under data perturbation, etc. This paper studies measures of conditioning for a conic linear system of the form (FP_d): Ax = b, x ∈ C_X, whose data is d = (A, b). We present a new measure of conditioning, denoted μ_d, and we show implications of μ_d for problem geometry and algorithm complexity, and demonstrate that the value of μ_d is independent of the specific data representation of (FP_d). We then prove certain relations among a variety of condition measures for (FP_d), including μ_d and the condition number C(d). We discuss some drawbacks of using the condition number C(d) as the sole measure of conditioning of a conic linear system, and we introduce the notion of a "preconditioner" for (FP_d) which results in an equivalent formulation (FP_d̃) of (FP_d) with a better condition number C(d̃). We characterize the best such preconditioner and provide an algorithm and complexity analysis for constructing an equivalent data instance d̃ whose condition number C(d̃) is within a known factor of the best possible.
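As a loose classical analogy to the paper's preconditioner for C(d): rescaling badly scaled data can shrink a condition number. The sketch below uses the familiar linear-algebra condition number kappa(A) of a 2x2 matrix, not Renegar's conic condition number, so it only illustrates the idea, not the paper's construction:

```python
import math

# kappa(A) = sigma_max / sigma_min for a 2x2 matrix, computed from the
# closed-form eigenvalues of the symmetric matrix M = A^T A.
def kappa_2x2(A):
    (a, b), (c, d) = A
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d   # entries of A^T A
    mean = 0.5 * (p + r)
    dev = math.sqrt((0.5 * (p - r)) ** 2 + q * q)
    lam_max, lam_min = mean + dev, mean - dev
    return math.sqrt(lam_max / lam_min)

A = [[1.0, 0.0], [0.0, 1024.0]]          # badly scaled system data
D = [[1.0, 0.0], [0.0, 1.0 / 1024.0]]    # diagonal "preconditioner"
DA = [[D[i][0] * A[0][j] + D[i][1] * A[1][j] for j in range(2)] for i in range(2)]
print(kappa_2x2(A), kappa_2x2(DA))  # 1024.0 1.0
```

The rescaled data DA describes an equivalent system with a far better condition number, which is the spirit of the paper's search for the best preconditioner for (FP_d).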
Complexity of Convex Optimization Using Geometry-Based Measures and a Reference Point
, 2002
Abstract

Cited by 16 (7 self)
Our concern lies in solving the following convex optimization problem: GP: minimize c^T x over x ∈ P, where P is a closed convex subset of the n-dimensional vector space X. We bound the complexity of computing an almost-optimal solution of GP in terms of natural geometry-based measures of the feasible region and the level set of almost-optimal solutions, relative to a given reference point that might be close to the feasible region and/or the almost-optimal level set. This contrasts with other complexity bounds for convex optimization that rely on data-based condition numbers or algebraic measures, and that do not take into account any a priori reference point information. AMS Subject Classification: 90C, 90C05, 90C60. Keywords: convex optimization, complexity, interior-point method, barrier method. This research has been partially supported through the Singapore-MIT Alliance. Portions of this research were undertaken when the author was a Visiting Scientist at Delft University of Technology.
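One natural geometry-based measure of the kind this abstract invokes is the radius of the largest ball inscribed in the feasible region. A grid-search sketch on a toy triangle (illustration only; in practice this inradius is found by a small LP, the Chebyshev-center problem, and the example is not taken from the paper):

```python
import math

# Inradius of the triangle P = {x >= 0, y >= 0, x + y <= 1}.
# Analytically, for a right triangle with legs 1 the inradius is
# (1 + 1 - sqrt(2)) / 2 = (2 - sqrt(2)) / 2 ~ 0.2929.

def dist_to_boundary(x, y):
    # distances from (x, y) to the three facets x = 0, y = 0, x + y = 1
    return min(x, y, (1.0 - x - y) / math.sqrt(2.0))

n = 400
best = max(
    dist_to_boundary(i / n, j / n)
    for i in range(n + 1)
    for j in range(n + 1 - i)   # only feasible grid points
)
print(best, (2.0 - math.sqrt(2.0)) / 2.0)
```

A large inradius relative to the region's diameter indicates a "round", well-behaved feasible region; complexity bounds of the geometry-based kind degrade as such measures do.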