Results 1 – 10 of 39
Randomized Gossip Algorithms
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 2006
Abstract

Cited by 206 (5 self)
Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of “gossip” algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.
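The pairwise averaging scheme this abstract describes can be sketched in a few lines; the graph, round count, seed, and function name below are illustrative choices for the demo, not taken from the paper.

```python
import random

def gossip_average(values, edges, rounds=20000, seed=0):
    """One run of randomized pairwise gossip: each step activates a random
    edge (i, j), and both endpoints move to their pairwise average. The sum
    of the values (hence the mean) is preserved at every step."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# Five nodes on a ring: every node should converge to the global mean 3.0.
ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
final = gossip_average([1.0, 2.0, 3.0, 4.0, 5.0], ring)
```

How fast the values collapse to the common mean is the averaging time that the paper ties to the second-largest eigenvalue of the doubly stochastic matrix characterizing the algorithm.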
DETERMINANT MAXIMIZATION WITH LINEAR MATRIX INEQUALITY CONSTRAINTS
Abstract

Cited by 167 (18 self)
The problem of maximizing the determinant of a matrix subject to linear matrix inequalities arises in many fields, including computational geometry, statistics, system identification, experiment design, and information and communication theory. It can also be considered as a generalization of the semidefinite programming problem. We give an overview of the applications of the determinant maximization problem, pointing out simple cases where specialized algorithms or analytical solutions are known. We then describe an interior-point method, with a simplified analysis of the worst-case complexity and numerical results that indicate that the method is very efficient, both in theory and in practice. Compared to existing specialized algorithms (where they are available), the interior-point method will generally be slower; the advantage is that it handles a much wider variety of problems.
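A toy instance of determinant maximization (not the paper's interior-point method): among positive semidefinite matrices with a fixed trace t, the AM-GM inequality on the eigenvalues shows the determinant is maximized by the scaled identity (t/n)I. The sampler below is a hypothetical helper used only for the comparison.

```python
import numpy as np

def random_psd_with_trace(n, t, seed):
    """Hypothetical helper: a random positive semidefinite matrix
    rescaled so that its trace equals t."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    S = A @ A.T                      # PSD by construction
    return S * (t / np.trace(S))

n, t = 4, 8.0
opt = (t / n) * np.eye(n)            # AM-GM: determinant is maximized here
best = np.linalg.det(opt)
others = [np.linalg.det(random_psd_with_trace(n, t, s)) for s in range(50)]
```

Every sampled feasible point has determinant at most that of the scaled identity, with equality only at the identity itself.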
Fastest Mixing Markov Chain on A Graph
 SIAM REVIEW
, 2003
Abstract

Cited by 88 (15 self)
We consider a symmetric random walk on a connected graph, where each edge is labeled with the probability of transition between the two adjacent vertices. The associated Markov chain has a uniform equilibrium distribution; the rate of convergence to this distribution, i.e. the mixing rate of the Markov chain, is determined by the second largest (in magnitude) eigenvalue of the transition matrix. In this paper we address the problem of assigning probabilities to the edges of the graph in such a way as to minimize the second largest magnitude eigenvalue, i.e., the problem of finding the fastest mixing Markov chain on the graph. We show that
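The quantity being minimized, the second-largest eigenvalue modulus (SLEM), is easy to compute for a fixed weighting. The sketch below (the path graph and probability values are illustrative) compares two edge-probability choices on the same graph:

```python
import numpy as np

def slem(P):
    """Second-largest eigenvalue modulus of a symmetric stochastic matrix P."""
    mags = np.sort(np.abs(np.linalg.eigvalsh(P)))[::-1]
    return mags[1]

def transition_matrix(n, edge_prob):
    """Symmetric random walk on an n-node path: each edge (i, i+1) carries
    probability edge_prob; leftover mass stays on the node as a self-loop."""
    P = np.zeros((n, n))
    for i in range(n - 1):
        P[i, i + 1] = P[i + 1, i] = edge_prob
    for i in range(n):
        P[i, i] = 1.0 - P[i].sum()
    return P

# On a 4-node path, the heavier edge weighting mixes faster (smaller SLEM).
slow = slem(transition_matrix(4, 0.1))
fast = slem(transition_matrix(4, 0.5))
```

Minimizing this quantity over all valid probability assignments, rather than evaluating it for fixed ones, is the convex problem the paper studies.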
The mathematics of eigenvalue optimization
 MATHEMATICAL PROGRAMMING
Abstract

Cited by 88 (11 self)
Optimization problems involving the eigenvalues of symmetric and nonsymmetric matrices present a fascinating mathematical challenge. Such problems arise often in theory and practice, particularly in engineering design, and are amenable to a rich blend of classical mathematical techniques and contemporary optimization theory. This essay presents a personal choice of some central mathematical ideas, outlined for the broad optimization community. I discuss the convex analysis of spectral functions and invariant matrix norms, touching briefly on semidefinite representability, and then outlining two broader algebraic viewpoints based on hyperbolic polynomials and Lie algebra. Analogous nonconvex notions lead into eigenvalue perturbation theory. The last third of the article concerns stability, for polynomials, matrices, and associated dynamical systems, ending with a section on robustness. The powerful and elegant language of nonsmooth analysis appears throughout, as a unifying narrative thread.
ON THE RANK OF EXTREME MATRICES IN SEMIDEFINITE PROGRAMS AND THE MULTIPLICITY OF OPTIMAL EIGENVALUES
, 1998
Abstract

Cited by 69 (1 self)
We derive some basic results on the geometry of semidefinite programming (SDP) and eigenvalue optimization, i.e., the minimization of the sum of the k largest eigenvalues of a smooth matrix-valued function. We provide upper bounds on the rank of extreme matrices in SDPs, and the first theoretically solid explanation of a phenomenon of intrinsic interest in eigenvalue optimization. In the spectrum of an optimal matrix, the kth and (k + 1)st largest eigenvalues tend to be equal and frequently have multiplicity greater than two. This clustering is intuitively plausible and has been observed as early as 1975. When the matrix-valued function is affine, we prove that clustering must occur at extreme points of the set of optimal solutions, if the number of variables is sufficiently large. We also give a lower bound on the multiplicity of the critical eigenvalue. These results generalize to the case of a general matrix-valued function under appropriate conditions.
Derivatives of Spectral Functions
, 1996
Abstract

Cited by 46 (11 self)
A spectral function of a Hermitian matrix X is a function which depends only on the eigenvalues of X, λ1(X) ≥ λ2(X) ≥ … ≥ λn(X), and hence may be written f(λ1(X), λ2(X), …, λn(X)) for some symmetric function f. Such functions appear in a wide variety of matrix optimization problems. We give a simple proof that this spectral function is differentiable at X if and only if the function f is differentiable at the vector λ(X), and we give a concise formula for the derivative. We then apply this formula to deduce an analogous expression for the Clarke generalized gradient of the spectral function. A similar result holds for real symmetric matrices. 1 Introduction and notation Optimization problems involving a symmetric matrix variable, X say, frequently involve symmetric functions of the eigenvalues of X in the objective or constraints. Examples include the maximum eigenvalue of X, or log(det X) (for positive definite X), or eigenvalue constraints such as positive semidefinit...
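Assuming the standard form of the derivative formula for spectral functions (the gradient of F at X is U diag(∇f(λ)) Uᵀ, where X = U diag(λ) Uᵀ is an eigendecomposition), it can be sanity-checked numerically on a case with a known answer:

```python
import numpy as np

def spectral_gradient(X, grad_f):
    """Gradient of the spectral function F(X) = f(eigenvalues(X)):
    U diag(f'(lambda)) U^T, with X = U diag(lambda) U^T symmetric."""
    lam, U = np.linalg.eigh(X)
    return U @ np.diag(grad_f(lam)) @ U.T

# Sanity check: f(lam) = sum(lam**2) gives F(X) = ||X||_F^2, whose
# gradient is known in closed form to be 2X.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = (A + A.T) / 2.0                   # symmetric test matrix
G = spectral_gradient(X, lambda lam: 2.0 * lam)
```

Here the formula reproduces the closed-form gradient 2X exactly, since U diag(2λ) Uᵀ = 2X.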
Legendre Functions and the Method of Random Bregman Projections
, 1997
Abstract

Cited by 44 (13 self)
In this paper, Bregman's method is studied within the powerful framework of Convex Analysis. New insights are obtained and the rich class of "Bregman/Legendre functions" is introduced. Bregman's method still works if the underlying function is Bregman/Legendre, or more generally if it is Legendre but some constraint qualification holds additionally. The key advantage is the broad applicability and
The fastest mixing Markov process on a graph and a connection to a maximum variance unfolding problem
 SIAM REVIEW
, 2006
Abstract

Cited by 42 (4 self)
We consider a Markov process on a connected graph, with edges labeled with transition rates between the adjacent vertices. The distribution of the Markov process converges to the uniform distribution at a rate determined by the second smallest eigenvalue λ2 of the Laplacian of the weighted graph. In this paper we consider the problem of assigning transition rates to the edges so as to maximize λ2 subject to a linear constraint on the rates. This is the problem of finding the fastest mixing Markov process (FMMP) on the graph. We show that the FMMP problem is a convex optimization problem, which can in turn be expressed as a semidefinite program, and therefore effectively solved numerically. We formulate a dual of the FMMP problem and show that it has a natural geometric interpretation as a maximum variance unfolding (MVU) problem, i.e., the problem of choosing a set of points to be as far apart as possible, measured by their variance, while respecting local distance constraints. This MVU problem is closely related to a problem recently proposed by Weinberger and Saul as a method for “unfolding” high-dimensional data that lies on a low-dimensional manifold. The duality between the FMMP and MVU problems sheds light on both problems, and allows us to characterize and, in some cases, find optimal solutions.
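The objective λ2 is straightforward to evaluate for any fixed rate assignment. This sketch (the 4-cycle and the particular rate values are illustrative) compares a uniform allocation against a skewed one with the same total rate budget:

```python
import numpy as np

def lambda2(weights, edges, n):
    """Second-smallest eigenvalue of the weighted graph Laplacian,
    the quantity the FMMP problem maximizes over rate assignments."""
    L = np.zeros((n, n))
    for w, (i, j) in zip(weights, edges):
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return np.sort(np.linalg.eigvalsh(L))[1]

# A 4-cycle with total rate budget 4: by symmetry, the uniform allocation
# mixes at least as fast as a skewed one with the same budget.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
uniform = lambda2([1.0] * 4, edges, 4)
skewed = lambda2([2.0, 0.5, 1.0, 0.5], edges, 4)
```

The FMMP problem asks for the allocation maximizing this λ2 over all nonnegative rates meeting the budget; on the symmetric cycle the uniform choice is the natural candidate.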
Nonsmooth Analysis of Eigenvalues
 MATHEMATICAL PROGRAMMING
, 1998
Abstract

Cited by 35 (10 self)
The eigenvalues of a symmetric matrix depend on the matrix nonsmoothly. This paper describes the nonsmooth analysis of these eigenvalues. In particular, I present a simple formula for the approximate (limiting Fréchet) subdifferential of an arbitrary function of the eigenvalues, subsuming earlier results on convex and Clarke subgradients. As an example I compute the subdifferential of the kth largest eigenvalue.
Twice Differentiable Spectral Functions
 SIAM J. Matrix Anal. Appl
, 2001
Abstract

Cited by 27 (4 self)
A function F on the space of n-by-n real symmetric matrices is called spectral if it depends only on the eigenvalues of its argument. Spectral functions are just symmetric functions of the eigenvalues. We show that a spectral function is twice (continuously) differentiable at a matrix if and only if the corresponding symmetric function is twice (continuously) differentiable at the vector of eigenvalues. We give a concise and usable formula for the Hessian. Keywords: spectral function, twice differentiable, eigenvalue optimization, semidefinite program, symmetric function, perturbation theory. 2000 Mathematics Subject Classification: 47A55, 15A18, 90C22. 1 Introduction In this paper we are interested in functions F of a symmetric matrix argument that are invariant under orthogonal similarity transformations: F(U^T A U) = F(A) for all orthogonal U and symmetric A. Department of Combinatorics & Optimization, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada. Email: aslewis@...