Results 1–10 of 151
Fast Linear Iterations for Distributed Averaging
Systems and Control Letters, 2003
"... We consider the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., that asymptotically computes the average of some initial values given at the nodes. When the iteration is assumed symmetric, the problem of finding the fastest converging linear ..."
Abstract

Cited by 429 (13 self)
 Add to MetaCart
(Show Context)
We consider the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., that asymptotically computes the average of some initial values given at the nodes. When the iteration is assumed symmetric, the problem of finding the fastest converging linear iteration can be cast as a semidefinite program, and therefore efficiently and globally solved. These optimal linear iterations are often substantially faster than several common heuristics that are based on the Laplacian of the associated graph.
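For a concrete sense of such an iteration, here is a minimal NumPy sketch (mine, not the paper's): it runs x(t+1) = W x(t) on a small path graph with Metropolis-Hastings weights, one of the simple Laplacian-style heuristics against which the SDP-optimal weights are compared.

    # Sketch: distributed averaging by the linear iteration x(t+1) = W x(t).
    import numpy as np

    n = 4
    edges = [(0, 1), (1, 2), (2, 3)]          # a path graph, for illustration
    deg = np.zeros(n)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1

    # Metropolis-Hastings weights: symmetric with rows summing to one, so the
    # iteration preserves the average and converges to it on a connected graph.
    W = np.zeros((n, n))
    for i, j in edges:
        W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
    W += np.diag(1.0 - W.sum(axis=1))

    x = np.array([1.0, 5.0, 3.0, 7.0])        # initial values; their average is 4.0
    for _ in range(200):
        x = W @ x                             # each node mixes with its neighbours
    print(x)                                  # all entries close to 4.0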
Fastest Mixing Markov Chain on a Graph
SIAM Review
Cited by 157 (16 self)
Abstract:
Author names in alphabetical order. Submitted to SIAM Review, problems and techniques section. We consider a symmetric random walk on a connected graph, where each edge is labeled with the probability of transition between the two adjacent vertices. The associated Markov chain has a uniform equilibrium distribution; the rate of convergence to this distribution, i.e., the mixing rate of the Markov chain, is determined by the second largest (in magnitude) eigenvalue of the transition matrix. In this paper we address the problem of assigning probabilities to the edges of the graph in such a way as to minimize the second largest magnitude eigenvalue, i.e., the problem of finding the fastest mixing Markov chain on the graph. We show that this problem can be formulated as a convex optimization problem, which can in turn be expressed as a semidefinite program (SDP). This allows us to easily compute the (globally) fastest mixing Markov chain for any graph with a modest number of edges (say, 1000) using standard numerical methods for SDPs. Larger problems can be solved by …
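The resulting SDP can be sketched in a few lines with the CVXPY modelling package (my illustration; the example graph and the use of CVXPY are assumptions, not from the paper). For a symmetric stochastic P, the second largest eigenvalue magnitude equals the spectral norm of P − (1/n)11ᵀ, which is convex in P:

    # Sketch: fastest mixing Markov chain on a small graph as a convex program.
    import numpy as np
    import cvxpy as cp

    n = 4
    edges = [(0, 1), (1, 2), (2, 3), (0, 3)]   # a 4-cycle, for illustration
    allowed = np.eye(n, dtype=bool)            # self-loops are always allowed
    for i, j in edges:
        allowed[i, j] = allowed[j, i] = True
    forbidden = (~allowed).astype(float)       # 1 where the graph has no edge

    P = cp.Variable((n, n), symmetric=True)
    J = np.ones((n, n)) / n
    constraints = [
        P >= 0,                                # entries are probabilities
        cp.sum(P, axis=1) == 1,                # each row sums to one
        cp.multiply(P, forbidden) == 0,        # no transitions off the graph
    ]
    prob = cp.Problem(cp.Minimize(cp.norm(P - J, 2)), constraints)
    prob.solve()
    print("SLEM:", round(prob.value, 4))
    print(np.round(P.value, 3))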
Uncertain convex programs: Randomized solutions and confidence levels
Math. Program., Ser. A, 2004
"... Many engineering problems can be cast as optimization problems subject to convex constraints that are parameterized by an uncertainty or ‘instance’ parameter. Two main approaches are generally available to tackle constrained optimization problems in presence of uncertainty: robust optimization and ..."
Abstract

Cited by 110 (12 self)
 Add to MetaCart
Many engineering problems can be cast as optimization problems subject to convex constraints that are parameterized by an uncertainty or ‘instance’ parameter. Two main approaches are generally available to tackle constrained optimization problems in the presence of uncertainty: robust optimization and chance-constrained optimization. Robust optimization is a deterministic paradigm where one seeks a solution which simultaneously satisfies all possible constraint instances. In chance-constrained optimization a probability distribution is instead assumed on the uncertain parameters, and the constraints are enforced up to a prespecified level of probability. Unfortunately, however, both approaches lead to computationally intractable problem formulations. In this paper, we consider an alternative ‘randomized’ or ‘scenario’ approach for dealing with uncertainty in optimization, based on constraint sampling. In particular, we study the constrained optimization problem obtained by taking into account only a finite set of N constraints, chosen at random among the possible constraint instances of the uncertain problem. We show that the resulting randomized solution fails to satisfy only a small portion of the original constraints, provided that a sufficient number of samples is drawn. Our key result is an efficient and explicit bound on the measure (probability or volume) of the original constraints that are possibly violated by the randomized solution. This volume rapidly decreases to zero as N is increased.
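A toy scenario program, sketched with NumPy and CVXPY (the uncertain constraint family below is made up for illustration): draw N instances of the uncertain parameter, enforce only the sampled constraints, and check empirically how often the resulting solution violates fresh, unseen instances.

    # Sketch: the scenario approach on a toy uncertain linear program.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    d, N = 2, 200                              # decision dimension, sample count
    c = np.array([-1.0, -1.0])                 # minimize c^T x, i.e. maximize x1 + x2

    # Each sampled delta yields one constraint a(delta)^T x <= 1.
    deltas = rng.uniform(0.0, np.pi / 2, size=N)
    A = np.stack([np.cos(deltas), np.sin(deltas)], axis=1)

    x = cp.Variable(d)
    prob = cp.Problem(cp.Minimize(c @ x), [A @ x <= 1, x >= 0])
    prob.solve()
    x_star = x.value

    # Empirical violation probability on fresh samples of the uncertainty.
    test = rng.uniform(0.0, np.pi / 2, size=10000)
    A_test = np.stack([np.cos(test), np.sin(test)], axis=1)
    print("x* =", np.round(x_star, 3))
    print("violation frequency:", np.mean(A_test @ x_star > 1 + 1e-9))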
Complete search in continuous global optimization and constraint satisfaction
Acta Numerica 13, 2004
"... ..."
Computing the Nearest Correlation Matrix – A Problem from Finance
2002
Cited by 99 (0 self)
Abstract:
A correlation matrix is a symmetric positive semidefinite matrix with unit diagonal. Correlation matrices occur in several areas of numerical linear algebra, including preconditioning of linear systems and error analysis of Jacobi methods for the symmetric eigenvalue problem (see Davies & Higham (2000) for details and references). The term ‘correlation matrix’ comes from statistics, since a matrix whose (i, j) entry is the correlation coefficient between two random variables x_i and x_j is symmetric positive semidefinite with unit diagonal. It is a statistical application that motivates this work, one coming from the finance industry. In stock research, sample correlation matrices constructed from vectors of stock returns are used for predictive purposes. Unfortunately, on any day when an observation is made, data are rarely available for all the stocks of interest. One way to deal with this problem is to compute the sample correlations of pairs of stocks using data drawn …
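As a rough illustration of the projection idea behind this problem (the bare sketch below is mine and only illustrative; the algorithm developed in the paper refines it with a correction step and a weighted norm), plain alternating projections between the positive semidefinite cone and the unit-diagonal set already produce a plausible repaired correlation matrix:

    # Sketch: approximate nearest correlation matrix by alternating projections.
    import numpy as np

    def proj_psd(A):
        # Project a symmetric matrix onto the positive semidefinite cone.
        w, V = np.linalg.eigh(A)
        return (V * np.maximum(w, 0)) @ V.T

    def proj_unit_diag(A):
        # Project onto the symmetric matrices with unit diagonal.
        B = A.copy()
        np.fill_diagonal(B, 1.0)
        return B

    def nearest_correlation(A, iters=100):
        X = (A + A.T) / 2
        for _ in range(iters):
            X = proj_unit_diag(proj_psd(X))
        return X

    # An "approximate correlation matrix" that is not positive semidefinite,
    # as can happen when pairwise correlations are computed on mismatched data.
    A = np.array([[1.0, 0.9, -0.3],
                  [0.9, 1.0, 0.9],
                  [-0.3, 0.9, 1.0]])
    X = nearest_correlation(A)
    print(np.round(X, 3), np.linalg.eigvalsh(X).min())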
Alternating direction augmented Lagrangian methods for semidefinite programming
2009
Cited by 64 (2 self)
Abstract:
We present an alternating direction method based on an augmented Lagrangian framework for solving semidefinite programming (SDP) problems in standard form. At each iteration, the algorithm, also known as a two-splitting scheme, minimizes the dual augmented Lagrangian function sequentially with respect to the Lagrange multipliers corresponding to the linear constraints, then the dual slack variables, and finally the primal variables, keeping the other variables fixed in each minimization. Convergence is proved using a fixed-point argument. A multiple-splitting algorithm is then proposed to handle SDPs with inequality and positivity constraints directly, without transforming them into equality constraints in standard form. Finally, numerical results for frequency assignment, maximum stable set and binary integer quadratic programming problems are presented to demonstrate the robustness and efficiency of our algorithm.
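Schematically, one pass of such a two-splitting scheme for the dual standard-form SDP max bᵀy subject to A*(y) + S = C, S ⪰ 0 might look as follows (the notation and the exact form of the updates are my reconstruction from the abstract, not taken from the paper):

    \begin{align*}
      y^{k+1} &= \arg\min_{y} \; \mathcal{L}_\mu\bigl(y, S^{k}, X^{k}\bigr),\\
      S^{k+1} &= \arg\min_{S \succeq 0} \; \mathcal{L}_\mu\bigl(y^{k+1}, S, X^{k}\bigr)
                 \quad\text{(a projection onto the PSD cone)},\\
      X^{k+1} &= X^{k} + \tfrac{1}{\mu}\bigl(\mathcal{A}^{*}(y^{k+1}) + S^{k+1} - C\bigr),
    \end{align*}

where L_μ(y, S, X) = −bᵀy + ⟨X, A*(y) + S − C⟩ + (1/2μ)‖A*(y) + S − C‖²_F is the dual augmented Lagrangian and the primal variable X acts as the multiplier for the dual equality constraint.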
Regularization methods for semidefinite programming
SIAM Journal on Optimization, 2009
"... We introduce a new class of algorithms for solving linear semidefinite programming (SDP) problems. Our approach is based on classical tools from convex optimization such as quadratic regularization and augmented Lagrangian techniques. We study the theoretical properties and we show that practical im ..."
Abstract

Cited by 45 (6 self)
 Add to MetaCart
We introduce a new class of algorithms for solving linear semidefinite programming (SDP) problems. Our approach is based on classical tools from convex optimization such as quadratic regularization and augmented Lagrangian techniques. We study the theoretical properties of these algorithms and show that practical implementations behave very well on some instances of SDP with a large number of constraints. We also show that the “boundary point method” from [PRW06] is an instance of this class.
On the complexity of Putinar’s Positivstellensatz
2005
Cited by 40 (8 self)
Abstract:
Let S = {x ∈ R^n : g_1(x) ≥ 0, ..., g_m(x) ≥ 0} be a basic closed semialgebraic set defined by real polynomials g_i. Putinar’s Positivstellensatz says that, under a certain condition stronger than compactness of S, every real polynomial f positive on S possesses a representation f = Σ_{i=0}^{m} σ_i g_i, where g_0 := 1 and each σ_i is a sum of squares of polynomials. Such a representation is a certificate for the nonnegativity of f on S. We give a bound on the degrees of the terms σ_i g_i in this representation which depends on the description of S, the degree of f, and a measure of how close f is to having a zero on S. As a consequence, we get information about the convergence rate of Lasserre’s procedure for optimization of a polynomial subject to polynomial constraints.
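A tiny worked instance of such a representation (my own example, not the paper's): take S = [−1, 1], described by the single polynomial g_1(x) = 1 − x², and f(x) = 2 − x, which is positive on S. Then

    \[
      2 - x \;=\; \underbrace{1 + \tfrac{1}{2}(x-1)^{2}}_{\sigma_0}
              \;+\; \underbrace{\tfrac{1}{2}}_{\sigma_1}\,\bigl(1 - x^{2}\bigr),
    \]

where σ_0 and σ_1 are sums of squares, so the identity certifies the positivity of f on S; the degree bound in the paper controls how large deg(σ_i g_i) may need to be in general.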
Detecting a Definite Hermitian Pair and a Hyperbolic or Elliptic Quadratic Eigenvalue Problem, and Associated Nearness Problems
2001
Cited by 34 (12 self)
Abstract:
An important class of generalized eigenvalue problems Ax = λBx is those in which A and B are Hermitian and some real linear combination of them is definite. For the quadratic eigenvalue problem (QEP) (λ²A + λB + C)x = 0 with Hermitian A, B and C and positive definite A, particular interest focuses on problems in which (x*Bx)² − 4(x*Ax)(x*Cx) is one-signed for all nonzero x: for the positive sign these problems are called hyperbolic, and for the negative sign elliptic. The important class of overdamped problems arising in mechanics is a subclass of the hyperbolic problems. For each of these classes of generalized and quadratic eigenvalue problems we show how to check that a putative member has the required properties, and we derive the distance to the nearest problem outside the class. For definite pairs (A, B) the distance is the Crawford number, and we derive bisection and level set algorithms both for testing its positivity and for computing it. Testing hyperbolicity of a QEP is shown to reduce to testing a related pair for definiteness. The distance to the nearest non-hyperbolic or non-elliptic n×n QEP is shown to be the solution of a global minimization problem with n−1 degrees of freedom. Numerical results are given to illustrate the theory and algorithms.
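A blunt numerical companion to the definiteness test (the grid scan below is my stand-in for the bisection and level-set algorithms derived in the paper; the matrices are made up): a Hermitian pair (A, B) is definite exactly when some real combination cos(t)A + sin(t)B is positive definite, and the largest smallest-eigenvalue found this way lower-bounds the Crawford number.

    # Sketch: certify definiteness of a Hermitian pair by scanning combinations.
    import numpy as np

    def definite_on_grid(A, B, num=720):
        # Return (best_t, best_min_eig); the pair is certified definite if
        # best_min_eig > 0 for some angle t.
        best_t, best_eig = None, -np.inf
        for t in np.linspace(0.0, 2 * np.pi, num, endpoint=False):
            lam_min = np.linalg.eigvalsh(np.cos(t) * A + np.sin(t) * B).min()
            if lam_min > best_eig:
                best_t, best_eig = t, lam_min
        return best_t, best_eig

    # Example: neither A nor B is definite on its own, but A + B is.
    A = np.array([[2.0, 0.0], [0.0, -1.0]])
    B = np.array([[-1.0, 0.0], [0.0, 2.0]])
    t, eig = definite_on_grid(A, B)
    print(f"best angle {t:.3f} rad, smallest eigenvalue there {eig:.3f}")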