Results 1 – 10 of 22
Randomized Gossip Algorithms
IEEE TRANSACTIONS ON INFORMATION THEORY, 2006
"... Motivated by applications to sensor, peertopeer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join a ..."
Abstract

Cited by 208 (5 self)
Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of “gossip” algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.
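As a rough illustration of the pairwise gossip scheme described above (a sketch, not the authors' code: the graph, initial values, and function names below are made up), each asynchronous clock tick activates one node, which averages its value with a randomly chosen neighbor:

```python
import random

def gossip_average(values, neighbors, num_ticks=10000, seed=0):
    """values: dict node -> initial value; neighbors: dict node -> list of adjacent nodes."""
    rng = random.Random(seed)
    x = dict(values)
    nodes = list(x)
    for _ in range(num_ticks):
        i = rng.choice(nodes)              # a node's clock ticks
        j = rng.choice(neighbors[i])       # it contacts a uniformly random neighbor
        x[i] = x[j] = 0.5 * (x[i] + x[j])  # both keep the pair average; the global sum is preserved
    return x

# Example: a 4-cycle; every node's value should approach the global average 2.5.
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(gossip_average({0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}, nbrs))
```

How quickly all values contract toward the global average is what the averaging-time analysis quantifies through the second largest eigenvalue.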
Gossip algorithms: Design, analysis and applications. 2004. Submitted, available at www.stanford.edu/~devavrat/infocom05.ps
"... Ahtruct Motivated by applications to sensor, peertopeer and ad hoc networks, we study distributed asynchronous algorithms, also known as gossip algorithms, for computation and information exchange in an arbitrarily connected network of nodes. Nodes in such networks operate under limited computatio ..."
Abstract

Cited by 158 (14 self)
Abstract—Motivated by applications to sensor, peer-to-peer and ad hoc networks, we study distributed asynchronous algorithms, also known as gossip algorithms, for computation and information exchange in an arbitrarily connected network of nodes. Nodes in such networks operate under limited computational, communication and energy resources. These constraints naturally give rise to “gossip” algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Using recent results of Boyd, Diaconis and Xiao …
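To make the eigenvalue dependence concrete, the sketch below builds the expected update matrix for the asynchronous model, assuming the construction W = I - (1/2n) * sum_{i,j} P_ij (e_i - e_j)(e_i - e_j)^T, where P_ij is the probability that node i contacts node j when its clock ticks; the example graph, matrix P, and names are illustrative assumptions:

```python
import numpy as np

def expected_gossip_matrix(P):
    """P[i, j]: probability that node i contacts node j when its clock ticks
    (rows sum to 1, zero diagonal). Returns the expected update matrix, which is
    symmetric and doubly stochastic by construction."""
    n = P.shape[0]
    W = np.eye(n)
    for i in range(n):
        for j in range(n):
            if P[i, j] > 0:
                e = np.zeros(n)
                e[i], e[j] = 1.0, -1.0
                W -= (P[i, j] / (2 * n)) * np.outer(e, e)
    return W

# Example: uniform neighbor selection on a 4-cycle.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)
lams = np.sort(np.abs(np.linalg.eigvalsh(expected_gossip_matrix(P))))[::-1]
print("second largest eigenvalue:", lams[1])  # the smaller this is, the faster the averaging
```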
A convergent incremental gradient method with constant step size
SIAM J. OPTIM, 2004
"... An incremental gradient method for minimizing a sum of continuously differentiable functions is presented. The method requires a single gradient evaluation per iteration and uses a constant step size. For the case that the gradient is bounded and Lipschitz continuous, we show that the method visits ..."
Abstract

Cited by 26 (2 self)
An incremental gradient method for minimizing a sum of continuously differentiable functions is presented. The method requires a single gradient evaluation per iteration and uses a constant step size. For the case that the gradient is bounded and Lipschitz continuous, we show that the method visits regions in which the gradient is small infinitely often. Under certain unimodality assumptions, global convergence is established. In the quadratic case, a global linear rate of convergence is shown. The method is applied to distributed optimization problems arising in wireless sensor networks, and numerical experiments compare the new method with the standard incremental gradient method.
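A minimal sketch of such an incremental gradient iteration with a constant step size, cycling through the component functions and evaluating one gradient per inner iteration; the quadratic components, step size, and names below are illustrative, not taken from the paper:

```python
import numpy as np

def incremental_gradient(grads, x0, step=0.05, num_passes=200):
    """grads: list of callables, grads[i](x) returning the gradient of component f_i at x.
    One component gradient is evaluated per inner iteration; the step size stays constant."""
    x = np.array(x0, dtype=float)
    for _ in range(num_passes):
        for g in grads:        # cycle through the components
            x -= step * g(x)
    return x

# Example: "sensor" i holds f_i(x) = 0.5 * ||x - a_i||^2, so the sum is minimized at the mean of the a_i.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(20, 2))
grads = [lambda x, a=a: x - a for a in anchors]
print(incremental_gradient(grads, x0=np.zeros(2)))
print(anchors.mean(axis=0))
```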
Convergent incremental optimization transfer algorithms: Application to tomography
 IEEE Trans. Med. Imag., Submitted
"... Abstract—No convergent ordered subsets (OS) type image reconstruction algorithms for transmission tomography have been proposed to date. In contrast, in emission tomography, there are two known families of convergent OS algorithms: methods that use relaxation parameters (Ahn and Fessler, 2003), and ..."
Abstract

Cited by 20 (8 self)
Abstract—No convergent ordered subsets (OS) type image reconstruction algorithms for transmission tomography have been proposed to date. In contrast, in emission tomography, there are two known families of convergent OS algorithms: methods that use relaxation parameters (Ahn and Fessler, 2003), and methods based on the incremental expectation maximization (EM) approach (Hsiao et al., 2002). This paper generalizes the incremental EM approach by introducing a general framework that we call “incremental optimization transfer.” Like incremental EM methods, the proposed algorithms accelerate convergence and ensure global convergence (to a stationary point) under mild regularity conditions, without requiring inconvenient relaxation parameters. The general optimization transfer framework enables the use of a very broad family of non-EM surrogate functions. In particular, this paper provides the first convergent OS-type algorithm for transmission tomography. The general approach is applicable to both monoenergetic and polyenergetic transmission scans as well as to other image reconstruction problems. We propose a particular incremental optimization transfer method for (nonconcave) penalized-likelihood (PL) transmission image reconstruction by using separable paraboloidal surrogates (SPS). Results show that the new “transmission incremental optimization transfer (TRIOT)” algorithm is faster than nonincremental ordinary SPS and even OS-SPS, yet is convergent.
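As a toy illustration only (the actual TRIOT algorithm builds separable paraboloidal surrogates for the transmission log-likelihood and is considerably more involved), the sketch below applies the incremental optimization transfer idea to a generic sum of smooth functions: each subset keeps a quadratic surrogate anchored at a past iterate, one surrogate is refreshed per iteration, and the sum of all surrogates is minimized in closed form. All names, curvature bounds, and the example problem are assumptions for illustration:

```python
import numpy as np

def incremental_opt_transfer(grads, curvatures, x0, num_passes=100):
    """grads[m](x): gradient of component f_m; curvatures[m]: a constant c_m that
    upper-bounds the curvature of f_m, so the quadratic surrogate majorizes f_m."""
    M = len(grads)
    x = np.array(x0, dtype=float)
    anchors = [x.copy() for _ in range(M)]            # expansion point of each surrogate
    stored_grads = [grads[m](x) for m in range(M)]
    c = np.asarray(curvatures, dtype=float)
    for _ in range(num_passes):
        for m in range(M):                            # visit one subset per iteration
            anchors[m] = x.copy()                     # refresh only subset m's surrogate
            stored_grads[m] = grads[m](x)
            # minimize the sum of all M quadratic surrogates in closed form
            x = sum(c[k] * anchors[k] - stored_grads[k] for k in range(M)) / c.sum()
    return x

# Example: f_m(x) = 0.5 * c_m * ||x - a_m||^2, gradient c_m*(x - a_m), curvature c_m.
a = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 4.0]])
c = [1.0, 2.0, 1.0]
grads = [lambda x, am=am, cm=cm: cm * (x - am) for am, cm in zip(a, c)]
print(incremental_opt_transfer(grads, c, x0=np.zeros(2)))  # converges to the weighted mean of the a_m
```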
Learning via Linear Operators: Maximum Margin Regression
In Proceedings of 2001 IEEE International Conference on Data Mining, 2005
"... We introduce a maximum margin framework realizing a regression type learning in an arbitrary Hilbert space whilst the corresponding dual problem preserving the structure and, therefore, the complexity that of the binary Support Vector Machine(SVM). We demonstrate via some examples this learning fram ..."
Abstract

Cited by 16 (6 self)
We introduce a maximum margin framework realizing regression-type learning in an arbitrary Hilbert space, while the corresponding dual problem preserves the structure, and therefore the complexity, of the binary Support Vector Machine (SVM). We demonstrate via several examples that this learning framework is broadly applicable to seemingly different problems. One example is the multiclass classification problem, which in this way can be implemented with the complexity of a binary SVM. The reduction in complexity does not come at the cost of performance; in some cases this approach can even improve classification accuracy. Multiclass classification is realized by treating the output labels as vector valued. Other examples address multi-view learning problems.
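A rough sketch of the construction as read from the abstract (not the authors' code): with vector-valued labels y_i, the dual takes the same box-constrained form as a binary SVM with the effective kernel k(x_i, x_j) * <y_i, y_j>. Here a simple projected-gradient ascent stands in for a proper QP solver, and all data, parameters, and names are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmr_fit(X, Y, C=1.0, gamma=1.0, lr=0.01, iters=2000):
    """X: (n, d) inputs; Y: (n, k) vector-valued labels (e.g. one-hot rows).
    Solves the box-constrained dual max sum(a) - 0.5 a'Qa with Q_ij = <y_i,y_j> k(x_i,x_j)
    by projected gradient ascent (a stand-in for a real QP solver)."""
    Q = (Y @ Y.T) * rbf_kernel(X, X, gamma)
    a = np.zeros(X.shape[0])
    for _ in range(iters):
        a = np.clip(a + lr * (1.0 - Q @ a), 0.0, C)
    return a

def mmr_predict(Xtr, Ytr, a, Xte, classes, gamma=1.0):
    """Score each candidate label vector and return the index of the best one."""
    K = rbf_kernel(Xte, Xtr, gamma)                 # (n_test, n_train)
    scores = K @ (a[:, None] * Ytr) @ classes.T     # (n_test, n_classes)
    return scores.argmax(axis=1)

# Tiny usage: three classes with one-hot label vectors (illustrative data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(20, 2)) for c in [(0, 0), (3, 0), (0, 3)]])
labels = np.repeat(np.arange(3), 20)
Y = np.eye(3)[labels]
a = mmr_fit(X, Y)
print((mmr_predict(X, Y, a, X, classes=np.eye(3)) == labels).mean())
```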
Incremental stochastic subgradient algorithms for convex optimization
 SIAM J. Optim
"... Abstract. In this paper we study the effect of stochastic errors on two constrained incremental subgradient algorithms. We view the incremental subgradient algorithms as decentralized network optimization algorithms as applied to minimize a sum of functions, when each component function is known o ..."
Abstract

Cited by 15 (6 self)
Abstract. In this paper we study the effect of stochastic errors on two constrained incremental subgradient algorithms. We view the incremental subgradient algorithms as decentralized network optimization algorithms applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. We first study the standard cyclic incremental subgradient algorithm, in which the agents form a ring structure and pass the iterate around the cycle. We consider the method with stochastic errors in the subgradient evaluations and provide sufficient conditions on the moments of the stochastic errors that guarantee almost sure convergence when a diminishing stepsize is used. We also obtain almost sure bounds on the algorithm’s performance when a constant stepsize is used. We then consider the Markov randomized incremental subgradient method, which is a non-cyclic version of the incremental algorithm where the sequence of computing agents is modeled as a time-nonhomogeneous Markov chain. Such a model is appropriate for mobile networks, as the network topology changes over time in these networks. We establish convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant stepsizes, respectively.
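A minimal sketch of the cyclic incremental subgradient iteration with stochastic errors and a diminishing stepsize; the ring of agents is represented simply by the order of the component list, and the example problem, noise model, and names are illustrative assumptions:

```python
import numpy as np

def cyclic_incremental_subgradient(subgrads, x0, noise_std=0.1, num_cycles=500, seed=0):
    """subgrads[i](x): a subgradient of agent i's component function f_i at x.
    The iterate is passed around the ring; each agent applies one noisy subgradient
    step, with a stepsize that diminishes from cycle to cycle."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for k in range(1, num_cycles + 1):
        step = 1.0 / k                     # diminishing stepsize
        for g in subgrads:                 # pass the iterate around the ring
            noisy_g = g(x) + noise_std * rng.normal(size=x.shape)
            x = x - step * noisy_g
    return x

# Example: f_i(x) = |x - a_i| (scalar), whose sum is minimized at the median of the a_i.
a = np.array([-2.0, 0.0, 1.0, 5.0, 9.0])
subgrads = [lambda x, ai=ai: np.sign(x - ai) for ai in a]
print(cyclic_incremental_subgradient(subgrads, x0=np.zeros(1)))  # should approach 1.0
```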
Distributed stochastic subgradient projection algorithms for convex optimization
Journal of Optimization Theory and Applications, 2010
"... Abstract. We consider a distributed multiagent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines ..."
Abstract

Cited by 15 (0 self)
Abstract. We consider a distributed multiagent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set. The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and nondiminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus between the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
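A minimal sketch of the scheme as described in the abstract: each agent mixes its neighbors' iterates through a doubly stochastic weight matrix, takes a noisy subgradient step on its own objective, then projects onto the common constraint set. The weight matrix, constraint set, noise model, and names below are illustrative assumptions:

```python
import numpy as np

def distributed_projected_subgradient(subgrads, W, project, x0,
                                      noise_std=0.05, num_iters=2000, seed=0):
    """subgrads[i](x): subgradient of agent i's objective; W: doubly stochastic mixing
    matrix matching the communication graph; project(x): projection onto the common
    constraint set. Each iteration: mix with neighbors, noisy subgradient step, project."""
    rng = np.random.default_rng(seed)
    m = len(subgrads)
    X = np.tile(np.asarray(x0, dtype=float), (m, 1))   # one row (iterate) per agent
    for k in range(1, num_iters + 1):
        step = 1.0 / k
        V = W @ X                                       # weighted average of neighbors' iterates
        for i in range(m):
            g = subgrads[i](V[i]) + noise_std * rng.normal(size=V[i].shape)
            X[i] = project(V[i] - step * g)
    return X

# Example: 3 agents on a path graph, f_i(x) = 0.5*||x - a_i||^2, constraint ||x|| <= 1.
a = np.array([[2.0, 0.0], [0.0, 2.0], [-1.0, -1.0]])
subgrads = [lambda x, ai=ai: x - ai for ai in a]
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])                         # Metropolis weights: doubly stochastic
project = lambda x: x / max(1.0, np.linalg.norm(x))
print(distributed_projected_subgradient(subgrads, W, project, x0=np.zeros(2)))
```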
Approximation Accuracy, Gradient Methods, and Error Bound for Structured Convex Optimization
2009
"... Convex optimization problems arising in applications, possibly as approximations of intractable problems, are often structured and large scale. When the data are noisy, it is of interest to bound the solution error relative to the (unknown) solution of the original noiseless problem. Related to this ..."
Abstract

Cited by 13 (1 self)
Convex optimization problems arising in applications, possibly as approximations of intractable problems, are often structured and large scale. When the data are noisy, it is of interest to bound the solution error relative to the (unknown) solution of the original noiseless problem. Related to this is an error bound for the linear convergence analysis of first-order gradient methods for solving these problems. Example applications include compressed sensing, variable selection in regression, TV-regularized image denoising, and sensor network localization.
Subgradient Methods and Consensus Algorithms for Solving Convex Optimization Problems
"... Abstract — In this paper we propose a subgradient method for solving coupled optimization problems in a distributed way given restrictions on the communication topology. The iterative procedure maintains local variables at each node and relies on local subgradient updates in combination with a conse ..."
Abstract

Cited by 10 (0 self)
Abstract — In this paper we propose a subgradient method for solving coupled optimization problems in a distributed way, given restrictions on the communication topology. The iterative procedure maintains local variables at each node and relies on local subgradient updates in combination with a consensus process. The local subgradient steps are applied simultaneously, as opposed to the standard sequential or cyclic procedure. We study convergence properties of the proposed scheme using results from consensus theory and approximate subgradient methods. The framework is illustrated on an optimal distributed finite-time rendezvous problem.