Results 1–10 of 27
Stochastic shortest paths via quasiconvex maximization
 PROCEEDINGS OF THE EUROPEAN SYMPOSIUM ON ALGORITHMS
, 2006
Abstract

Cited by 30 (8 self)
We consider the problem of finding shortest paths in a graph with independent randomly distributed edge lengths. Our goal is to maximize the probability that the path length does not exceed a given threshold value (deadline). We give a surprising exact n^{Θ(log n)} algorithm for the case of normally distributed edge lengths, which is based on quasiconvex maximization. We then prove average and smoothed polynomial bounds for this algorithm, which also translate to average and smoothed bounds for the parametric shortest path problem, and extend to a more general nonconvex optimization setting. We also consider a number of other edge length distributions, giving a range of exact and approximation schemes.
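For a fixed path with independent normally distributed edge lengths, the objective this paper maximizes has a closed form: the path length is normal with mean Σμ_e and variance Σσ_e², so P(length ≤ deadline) = Φ((deadline − Σμ_e)/√(Σσ_e²)). A minimal sketch of evaluating that objective (the (mean, variance) pairs and the two example paths are illustrative, not from the paper):

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def on_time_probability(path, deadline):
    """P(path length <= deadline) for a path whose edge lengths are
    independent normals, given as (mean, variance) pairs: the total
    length is normal with mean sum(mu_e) and variance sum(sigma_e^2)."""
    mu = sum(m for m, _ in path)
    var = sum(v for _, v in path)
    return normal_cdf((deadline - mu) / math.sqrt(var))

# A riskier path with smaller mean can lose to a safer path with
# larger mean once the objective is the on-time probability.
fast_risky = [(4.0, 9.0), (4.0, 9.0)]   # mean 8, variance 18
slow_safe = [(4.5, 0.5), (4.5, 0.5)]    # mean 9, variance 1
best = max([fast_risky, slow_safe],
           key=lambda p: on_time_probability(p, deadline=10.0))
```

Note how the deadline objective differs from expected length: the hardness (and the quasiconvex structure) comes from this mean–variance trade-off interacting across exponentially many paths.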
Smoothed analysis: an attempt to explain the behavior of algorithms in practice
 COMMUN. ACM
, 2009
Abstract

Cited by 30 (0 self)
Many algorithms and heuristics work well on real data, despite having poor complexity under the standard worst-case measure. Smoothed analysis [36] is a step towards a theory that explains the behavior of algorithms in practice. It is based on the assumption that inputs to algorithms are subject to random perturbation and modification in their formation. A concrete example of such a smoothed analysis is a proof that the simplex algorithm for linear programming usually runs in polynomial time when its input is subject to modeling or measurement noise.
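The underlying definition is the smoothed cost max_x E[T(x + σg)]: the worst case is taken over inputs, but each input is randomly perturbed before the cost is measured. A toy illustration of why this tames isolated pathological inputs (the cost function here is invented for the demonstration, not taken from the survey):

```python
import random

def cost(x):
    # Toy "running time": catastrophic on one pathological input only.
    return 10**6 if x == 0.0 else 1

def smoothed_cost(x, sigma, trials=1000, seed=0):
    # Monte Carlo estimate of E[cost(x + sigma * g)] with g standard
    # normal: the inner expectation taken in smoothed analysis.
    rng = random.Random(seed)
    return sum(cost(x + rng.gauss(0.0, sigma)) for _ in range(trials)) / trials

worst_case = cost(0.0)               # huge: the worst-case measure
smoothed = smoothed_cost(0.0, 0.1)   # benign: perturbation avoids the spike
```

The perturbation almost surely misses the measure-zero bad input, so the smoothed cost at the worst-case point is small even though the worst-case cost is enormous.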
Efficient Algorithms Using The Multiplicative Weights Update Method
, 2006
Abstract

Cited by 28 (1 self)
Algorithms based on convex optimization, especially linear and semidefinite programming, are ubiquitous in Computer Science. While there are polynomial time algorithms known to solve such problems, quite often the running time of these algorithms is very high. Designing simpler and more efficient algorithms is important for practical impact. In this thesis, we explore applications of the Multiplicative Weights method in the design of efficient algorithms for various optimization problems. This method, which was repeatedly discovered in quite diverse fields, is an algorithmic technique which maintains a distribution on a certain set of interest, and updates it iteratively by multiplying the probability mass of elements by suitably chosen factors based on feedback obtained by running another algorithm on the distribution. We present a single meta-algorithm which unifies all known applications of this method in a common framework. Next, we generalize the method to the setting of symmetric matrices rather than real numbers. We derive the following applications of the resulting Matrix Multiplicative Weights algorithm:
1. The first truly general, combinatorial, primal-dual method for designing efficient algorithms for semidefinite programming. Using these techniques, we obtain significantly faster algorithms for obtaining O(√log n) approximations to various graph partitioning problems, such as Sparsest Cut and Balanced Separator in both directed and undirected weighted graphs, and constraint satisfaction problems such as Min UnCut and Min 2CNF Deletion.
2. An Õ(n³) time derandomization of the Alon-Roichman construction of expanders using Cayley graphs. The algorithm yields a set of O(log n) elements which generates an expanding Cayley graph in any group of n elements.
3. An Õ(n³) time deterministic O(log n) approximation algorithm for the quantum hypergraph covering problem.
4. An alternative proof of a result of Aaronson that the γ-fat-shattering dimension of quantum states on n qubits is O(n/γ²).
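The update rule the abstract describes (multiply each element's probability mass by a factor chosen from feedback) can be sketched in a few lines; this is a minimal scalar version with the common (1 − η·loss) factor, with illustrative losses, not the matrix generalization developed in the thesis:

```python
def multiplicative_weights(loss_rounds, eta=0.1):
    """Sketch of the generic Multiplicative Weights meta-algorithm:
    keep one weight per element of the set of interest, and after each
    round of feedback multiply each weight by (1 - eta * loss)."""
    n = len(loss_rounds[0])
    weights = [1.0] * n
    for losses in loss_rounds:   # losses assumed to lie in [-1, 1]
        weights = [w * (1.0 - eta * l) for w, l in zip(weights, losses)]
    total = sum(weights)
    return [w / total for w in weights]   # the maintained distribution

# Element 0 suffers loss 1 every round, element 1 suffers loss 0:
# probability mass migrates exponentially fast to the better element.
p = multiplicative_weights([[1.0, 0.0]] * 50)
```

The "feedback obtained by running another algorithm" in the abstract corresponds to the loss vector supplied at each round; different instantiations of that oracle give the different applications unified by the meta-algorithm.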
Black-Box Randomized Reductions in Algorithmic Mechanism Design
Abstract

Cited by 25 (5 self)
We give the first black-box reduction from arbitrary approximation algorithms to truthful approximation mechanisms for a non-trivial class of multi-parameter problems. Specifically, we prove that every packing problem that admits an FPTAS also admits a truthful-in-expectation randomized mechanism that is an FPTAS. Our reduction makes novel use of smoothed analysis, by employing small perturbations as a tool in algorithmic mechanism design. We develop a “duality” between linear perturbations of the objective function of an optimization problem and of its feasible set, and use the “primal” and “dual” viewpoints to prove the running time bound and the truthfulness guarantee, respectively, for our mechanism.
On the Hardness and Smoothed Complexity of Quasi-Concave Minimization
Abstract

Cited by 7 (2 self)
In this paper, we resolve the smoothed and approximation complexity of low-rank quasi-concave minimization, providing both upper and lower bounds. As an upper bound, we provide the first smoothed analysis of quasi-concave minimization. The analysis is based on a smoothed bound for the number of extreme points of the projection of the feasible polytope onto a k-dimensional subspace, where k is the rank (informally, the dimension of nonconvexity) of the quasi-concave function. Our smoothed bound is polynomial in the original dimension of the problem n and the perturbation size ρ, and it is exponential in the rank of the function k. From this, we obtain the first randomized fully polynomial-time approximation scheme for low-rank quasi-concave minimization under broad conditions. In contrast with this, we prove (log n)-hardness of approximation for general quasi-concave minimization. This shows that our smoothed bound is essentially tight, in that no polynomial smoothed bound is possible for quasi-concave functions of general rank k. The tools that we introduce for the smoothed analysis may be of independent interest. All previous smoothed analyses of polytopes analyzed projections onto two-dimensional subspaces and studied them using trigonometry to examine the angles between vectors and 2-planes in R^n. In this paper, we provide what is, to our knowledge, the first smoothed analysis of the projection of polytopes onto higher-dimensional subspaces. To do this, we replace the trigonometry with tools from random matrix theory and differential geometry on the Grassmannian. Our hardness reduction is based on entirely different proofs that may also be of independent interest: we show that the stochastic 2-stage minimum spanning tree problem has a supermodular objective and that su …
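The reason counting extreme points of the (projected) polytope controls the search: a quasi-concave function attains its minimum over a polytope at an extreme point, so minimization reduces to examining vertices. A tiny numerical check of that fact (the function and polytope are illustrative choices, not from the paper):

```python
import random

# f is concave, hence quasi-concave; its minimum over a polytope is
# attained at an extreme point, so enumerating vertices suffices.
def f(x, y):
    return -(x * x + y * y)

vertices = [(1, 1), (1, -1), (-1, 1), (-1, -1)]  # corners of [-1, 1]^2
vertex_min = min(f(x, y) for x, y in vertices)   # attained at any corner

rng = random.Random(0)
samples = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(10000)]
no_interior_point_beats_vertices = all(f(x, y) >= vertex_min for x, y in samples)
```

The paper's smoothed bound says that after perturbation the relevant projection has only polynomially many such extreme points when the rank k is constant.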
Recent progress and open problems in algorithmic convex geometry
Abstract

Cited by 5 (1 self)
This article is a survey of developments in algorithmic convex geometry over the past decade. These include algorithms for sampling, optimization, integration, rounding and learning, as well as mathematical tools such as isoperimetric and concentration inequalities. Several open problems and conjectures are discussed along the way.
PROJECTIVE RENORMALIZATION FOR IMPROVING THE BEHAVIOR OF A HOMOGENEOUS CONIC LINEAR SYSTEM
, 2007
Abstract

Cited by 4 (0 self)
In this paper we study the homogeneous conic system F: Ax = 0, x ∈ C \ {0}. We choose a point s̄ ∈ int C* that serves as a normalizer and consider computational properties of the normalized system F_s̄: Ax = 0, s̄ᵀx = 1, x ∈ C. We show that the computational complexity of solving F via an interior-point method depends only on the complexity value ϑ of the barrier for C and on the symmetry of the origin in the image set H_s̄ := {Ax : s̄ᵀx = 1, x ∈ C}, where the symmetry of 0 in H_s̄ is sym(0, H_s̄) := max{α : y ∈ H_s̄ ⇒ −αy ∈ H_s̄}. We show that a solution of F can be computed in O(√ϑ ln(ϑ/sym(0, H_s̄))) interior-point iterations. In order to improve the theoretical and practical computation of a solution of F, we next present a general theory for projective renormalization of the feasible region F_s̄ and the image set H_s̄, and prove the existence of a normalizer s̄ such that sym(0, H_s̄) ≥ 1/m provided that F has an interior solution. We develop a methodology for constructing a normalizer s̄ such that sym(0, H_s̄) ≥ 1/m with high probability, based on sampling on a geometric random walk, with associated probabilistic complexity analysis. While such a normalizer is not itself computable in strongly polynomial time, the normalizer will yield a conic system that is solvable in O(√ϑ ln(mϑ)) iterations, which is strongly polynomial time. Finally, we implement this methodology on randomly generated homogeneous linear programming feasibility problems, constructed to be poorly behaved. Our computational results indicate that the projective renormalization methodology holds the promise to markedly reduce the overall computation time for conic feasibility problems; for instance, we observe a 46% decrease in average IPM iterations for 100 randomly generated poorly-behaved problem instances of dimension 1000 × 5000.
Some problems in asymptotic convex geometry and random matrices motivated by numerical algorithms
 Proceedings of the conference on Banach Spaces and their applications in analysis (in honor of N. Kalton’s 60th birthday
Abstract

Cited by 2 (2 self)
The simplex method in Linear Programming motivates several problems of asymptotic convex geometry. We discuss some conjectures and known results in two related directions: computing the size of projections of high-dimensional polytopes, and estimating the norms of random matrices and their inverses.

1. Asymptotic convex geometry and Linear Programming. Linear Programming studies the problem of maximizing a linear functional subject to linear constraints. Given an objective vector z ∈ R^d and constraint vectors a_1, ..., a_n ∈ R^d, we consider the linear program

(LP) maximize ⟨z, x⟩ subject to ⟨a_i, x⟩ ≤ 1, i = 1, ..., n.

This linear program has d unknowns, represented by x, and n constraints. Every linear program can be reduced to this form by a simple interpolation argument [36]. The feasible set of the linear program is the polytope P := {x ∈ R^d : ⟨a_i, x⟩ ≤ 1, i = 1, ..., n}. The solution of (LP) is then a vertex of P. We can thus look at (LP) from a geometric viewpoint: for a polytope P in R^d given by n faces, and for a vector z, find the vertex that maximizes the linear functional ⟨z, x⟩.

The oldest and still the most popular method to solve this problem is the simplex method. It starts at some vertex of P and generates a walk on the edges of P toward the solution vertex. At each step, a pivot rule determines the choice of the next vertex, so there are many variants of the simplex method with different pivot rules. (We are not concerned here with how to find the initial vertex, which is a non-trivial problem in itself.)
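The geometric viewpoint above can be made concrete in d = 2: since the optimum of (LP) sits at a vertex of P, one can (for tiny instances only) intersect every pair of constraint lines and keep the best feasible intersection. This brute-force sketch is not the simplex method, just an illustration of "find the vertex maximizing ⟨z, x⟩"; the instance is an invented example:

```python
from itertools import combinations

def lp_by_vertex_enumeration(a, z):
    """Brute-force (LP) in d = 2: intersect every pair of constraint
    lines <a_i, x> = 1 and return the feasible intersection point
    maximizing <z, x> (assumes P is bounded with an optimum vertex)."""
    best, best_val = None, float("-inf")
    for p, q in combinations(a, 2):
        det = p[0] * q[1] - p[1] * q[0]
        if abs(det) < 1e-12:
            continue                      # parallel constraints: no vertex
        # Cramer's rule on p.x = 1, q.x = 1.
        x = ((q[1] - p[1]) / det, (p[0] - q[0]) / det)
        if all(c[0] * x[0] + c[1] * x[1] <= 1 + 1e-9 for c in a):
            val = z[0] * x[0] + z[1] * x[1]
            if val > best_val:
                best, best_val = x, val
    return best

# P is the square [-1, 1]^2; z = (1, 1) is maximized at the vertex (1, 1).
a = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
vertex = lp_by_vertex_enumeration(a, (1.0, 1.0))
```

The simplex method avoids this exponential enumeration by walking along edges between adjacent vertices, which is exactly why the number and geometry of vertices of (projected) polytopes matters in the conjectures the survey discusses.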
A characterization theorem and an algorithm for a convex hull problem
The Work of Daniel A. Spielman
 PROCEEDINGS OF THE INTERNATIONAL CONGRESS OF MATHEMATICIANS
, 2010
Abstract

Cited by 1 (0 self)
Dan Spielman has made groundbreaking contributions in theoretical computer science and mathematical programming, and his work has profound connections to the study of polytopes and convex bodies, to error-correcting codes, expanders, and numerical analysis. Many of Spielman’s achievements came out of a beautiful collaboration with Shang-Hua Teng spanning over two decades. This paper describes some of Spielman’s main achievements. Section 1 describes smoothed analysis of algorithms, which is a new paradigm for the analysis of algorithms introduced by Spielman and Teng. Section 2 describes Spielman and Teng’s explanation for the excellent practical performance of the simplex algorithm via smoothed analysis. Spielman and Teng’s theorem asserts that the simplex algorithm takes a polynomial number of steps for a random Gaussian perturbation of every linear programming problem. Section 3 is devoted to Spielman’s works on error-correcting codes, and in particular his construction of linear-time encodable and decodable high-rate codes based …