Results 11–20 of 35 for "consensus on colored networks"
 2012 IEEE 51st Annual Conference on Decision and Control (CDC), 2012
Cited by 2 (0 self)
Abstract — We propose a novel distributed algorithm for one of the most fundamental problems in networks: average consensus. We view average consensus as an optimization problem, which allows us to apply recent techniques and results from the optimization literature. Assuming that a coloring scheme of the network is available, we derive a decentralized, asynchronous, and communication-efficient algorithm based on the Alternating Direction Method of Multipliers (ADMM). Simulations against other state-of-the-art consensus algorithms show that the proposed algorithm exhibits the most stable performance across several network models.
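To illustrate the "consensus as optimization" view in this abstract, here is a minimal sketch of average consensus solved by ADMM. It uses the simpler global-consensus form with a centralized averaging step, not the paper's decentralized, coloring-based variant; the function name, penalty parameter, and iteration count are illustrative assumptions.

```python
# Sketch: pose average consensus as min sum_i 0.5*(x_i - a_i)^2 s.t. x_i = z,
# and solve it with ADMM (global-consensus form). The z-update here averages
# over all nodes, so this is NOT decentralized; it only illustrates the idea.
import numpy as np

def admm_average_consensus(a, rho=1.0, iters=100):
    a = np.asarray(a, dtype=float)
    n = len(a)
    x = np.zeros(n)          # local copies, one per node
    u = np.zeros(n)          # scaled dual variables
    z = 0.0                  # consensus variable
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # local proximal step
        z = np.mean(x + u)                       # consensus (averaging) step
        u = u + x - z                            # dual update
    return z

print(admm_average_consensus([1.0, 2.0, 6.0]))   # ≈ 3.0, the average
```

In the decentralized version sketched by the abstract, the coupling constraints are written per edge and a graph coloring determines which nodes may update simultaneously without conflicts.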
Distributed Estimation of Macroscopic Channel Parameters in Dense Cooperative Wireless Networks
Cited by 2 (1 self)
Abstract — In peer-to-peer wireless networks, knowledge of the channel quality of multiple links is fundamental to calibrating cooperative communication/processing techniques and designing efficient resource-sharing strategies. This paper focuses on distributed estimation algorithms that enable the network to self-learn key environment-dependent parameters that govern the channel quality of all links in the network. Considering an indoor scenario with fixed wireless terminals and moving objects/people in the environment, we parameterize the channel quality of each link in terms of path loss and Rician K-factor, modelling these macro-parameters according to a site-specific stochastic model. The contribution of the paper is twofold: a measurement campaign carried out with IEEE 802.15.4 devices to validate the stochastic model, and distributed algorithms to estimate the environment-dependent parameters of the model. Various weighted average consensus schemes are proposed to enable convergence to the equivalent global (centralized) estimate. Performance is analyzed in terms of convergence speed, error at convergence, and communication overhead, using both experimental and simulated data.
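One standard way to realize the weighted average consensus mentioned in this abstract is the ratio-consensus construction: each node runs two plain consensus recursions, on w_i·y_i and on w_i, and takes their ratio. The sketch below uses this well-known construction with an illustrative ring network, step size, and weights; it is not claimed to be the paper's specific scheme.

```python
# Weighted average consensus via two plain consensus recursions: the ratio of
# the limits equals sum(w_i*y_i)/sum(w_i), the weighted global estimate (e.g.
# inverse-variance weighting). Network, eps, and iters are assumptions.
import numpy as np

def weighted_consensus(y, w, neighbors, eps=0.2, iters=200):
    num = np.array(w, float) * np.array(y, float)   # consensus on w_i * y_i
    den = np.array(w, float)                        # consensus on w_i
    for _ in range(iters):
        new_num, new_den = num.copy(), den.copy()
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                new_num[i] += eps * (num[j] - num[i])
                new_den[i] += eps * (den[j] - den[i])
        num, den = new_num, new_den
    return num / den    # each node's estimate of sum(w*y)/sum(w)

# 4-node ring; unit weights reduce to the plain average.
nbrs = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(weighted_consensus([1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 1.0, 1.0], nbrs))
```

The step size must satisfy eps < 1/deg_max for the linear iteration to be stable; here deg_max = 2, so eps = 0.2 is safe.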
Learning in Network Games with Incomplete Information
Cited by 1 (0 self)
[Asymptotic analysis and tractable implementation of rational behavior] The role of social networks in learning and opinion formation has been demonstrated in a variety of scenarios, such as the dynamics of technology adoption [1], consumption behavior [2], organizational behavior [3], and financial markets [4]. The emergence of network-wide social phenomena from local interactions between connected agents has been studied using field data [5]–[7] as well as lab experiments [8], [9]. Interest in opinion dynamics over networks is further amplified by the continuous growth in the amount of time that individuals spend on social media websites and the consequent increase in the importance of networked phenomena in social and economic outcomes. As quantitative data …
Regret bounds of a distributed saddle point algorithm
 in Proc. Int. Conf. Acoust. Speech Signal Process
, 2015
Cited by 1 (0 self)
An algorithm to learn optimal actions in distributed convex repeated games is developed. Learning is repeated because cost functions are revealed sequentially, and distributed because they are revealed to agents of a network that can exchange information with neighboring nodes only. Learning is measured in terms of the global networked regret, which is the accumulated loss of causal prediction with respect to a centralized clairvoyant agent to which the information of all times and agents is revealed at the initial time. We use a variant of the Arrow-Hurwicz saddle point algorithm that penalizes local agent disagreement via Lagrange multipliers and leads to a distributed online algorithm. We show that decisions made with this saddle point algorithm lead to regret whose order is not larger than O(√T), where T is the total number of rounds of the game. Numerical behavior is illustrated for the particular case of dynamic sensor network estimation across different network sizes, connectivities, and topologies.
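The Arrow-Hurwicz mechanism described here — primal descent on local costs plus a multiplier that penalizes disagreement — can be sketched for two agents with quadratic per-round costs. The costs, step size, and horizon below are illustrative assumptions, not the paper's setup; note the O(1/√T) step size that underlies the O(√T) regret bound.

```python
# Two-agent Arrow-Hurwicz saddle point sketch: per-round costs
# f_{i,t}(x) = 0.5*(x - a_i)^2 (held fixed here for simplicity), with the
# disagreement x1 = x2 penalized by a Lagrange multiplier lam in
# L = f1(x1) + f2(x2) + lam*(x1 - x2). Primal descent, dual ascent.
import math

T = 2000
eta = 1.0 / math.sqrt(T)     # O(1/sqrt(T)) step, matching the regret analysis
a1, a2 = 1.0, 3.0            # local cost minimizers (assumed)
x1 = x2 = lam = 0.0
total_loss = 0.0
for _ in range(T):
    total_loss += 0.5 * (x1 - a1) ** 2 + 0.5 * (x2 - a2) ** 2
    g1 = (x1 - a1) + lam     # dL/dx1
    g2 = (x2 - a2) - lam     # dL/dx2
    x1, x2 = x1 - eta * g1, x2 - eta * g2
    lam += eta * (x1 - x2)   # dual ascent on the disagreement
best = float(T)              # best fixed consensus decision x = 2 costs 1/round
print(x1, x2, total_loss - best)   # x1, x2 approach 2.0; excess loss is sublinear in T
```

The saddle point of L is x1 = x2 = 2 with lam = −1, so the multiplier exactly balances the two agents' pull toward their own minimizers.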
On the Convergence of Alternating Direction Lagrangian Methods for Nonconvex Structured Optimization Problems
Cited by 1 (0 self)
Abstract — This paper investigates convergence properties of scalable algorithms for nonconvex and structured optimization. We focus on two methods that combine the fast convergence of augmented-Lagrangian-based methods with the separability of alternating optimization. The first method is adapted from the classic quadratic penalty function method and is called the Alternating Direction Penalty Method (ADPM). Unlike the original quadratic penalty function method, in which single-step optimizations are adopted, ADPM uses alternating optimization, which in turn is exploited to make the algorithm scalable. The second method is the well-known Alternating Direction Method of Multipliers (ADMM). We show that ADPM asymptotically converges to a primal feasible point under mild conditions. Moreover, we give numerical evidence of ADPM's potential for computing a good objective value. For ADMM, we give sufficient conditions under which the algorithm asymptotically reaches the standard first-order necessary conditions for local optimality. Throughout the paper, we substantiate the theory with numerical examples, and finally demonstrate possible applications of ADPM and ADMM to a nonconvex localization problem in wireless sensor networks. Index Terms — Nonconvex optimization, ADMM, localization
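The ADPM idea — replace the constraint by a quadratic penalty, minimize alternately over the blocks, and grow the penalty weight — can be sketched on a toy equality-constrained convex problem. The problem, the inner-sweep loop, and the geometric penalty schedule below are all illustrative assumptions, not taken from the paper (which targets nonconvex problems).

```python
# ADPM sketch on: min 0.5*(x-1)^2 + 0.5*(z-3)^2  s.t. x = z.
# The constraint is replaced by the penalty 0.5*rho*(x-z)^2; we alternate
# exact minimizations in x and z (a few sweeps per penalty level), then
# increase rho, which drives the iterates toward feasibility x = z.
def adpm_toy(outer=30, inner=50):
    x, z, rho = 0.0, 0.0, 1.0
    for _ in range(outer):
        for _ in range(inner):
            x = (1.0 + rho * z) / (1.0 + rho)   # exact minimizer in x
            z = (3.0 + rho * x) / (1.0 + rho)   # exact minimizer in z
        rho *= 1.5                              # increasing penalty schedule
    return x, z

print(adpm_toy())   # both coordinates approach 2.0, the constrained optimum
```

For any fixed rho, the penalized optimum keeps a residual gap x − z of order 1/rho, which is why the penalty must grow for asymptotic feasibility; ADMM avoids this by carrying an explicit multiplier instead.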
A UNIFIED ALGORITHMIC APPROACH TO DISTRIBUTED OPTIMIZATION
Cited by 1 (1 self)
We address general optimization problems formulated on networks. Each node in the network has a function, and the goal is to find a vector x ∈ R^n that minimizes the sum of all the functions. We assume that each function depends on a set of components of x, not necessarily on all of them. This creates additional structure in the problem, which can be captured by the classification scheme we develop. This scheme not only enables us to design an algorithm that solves very general distributed optimization problems, but also allows us to categorize prior algorithms and applications. Our general-purpose algorithm outperforms prior algorithms, including application-specific ones. Index Terms — Distributed optimization, sensor networks
ENTANGLED KALMAN FILTERS FOR COOPERATIVE ESTIMATION
In this paper we propose a distributed estimation scheme for tracking the state of a Gauss-Markov model by means of independent observations at sensors connected in a network. Our emphasis is on low communication demands to alleviate the burden on potentially battery-powered sensors, which limits the achievable performance with respect to an ideal centralized Kalman filter with access to all sensor measurements. The cooperation is performed in a distributed way to guarantee scalability and robustness to failures, and it is designed to reduce the detrimental effects of channel noise on the sensor exchanges.
Distributed inference over . . .
, 2010
Distributed inference has applications in fields as varied as source localization, evaluation of network quality, and remote monitoring of wildlife habitats. In this dissertation, distributed inference algorithms over multiple-access channels are considered. The performance of these algorithms and the effects of wireless communication channels on that performance are studied. In a first class of problems, distributed inference over fading Gaussian multiple-access channels with amplify-and-forward is considered. Sensors observe a phenomenon and transmit their observations using the amplify-and-forward scheme to a fusion center (FC). Distributed estimation is considered with a single antenna at the FC, where performance is evaluated using the asymptotic variance of the estimator. The loss in performance due to varying assumptions on the limited amounts of channel information at the sensors is quantified. With multiple antennas at the FC, a distributed detection problem is also considered, where the error exponent is used to evaluate performance. It is shown that for zero-mean channels between the sensors and the FC, when there is no …
Distributed Gradient Methods with Variable Number of Working Nodes
Abstract — We consider distributed optimization where N nodes in a connected network minimize the sum of their local costs subject to a common constraint set. We propose a distributed projected gradient method where each node, at each iteration k, performs an update (is active) with probability p_k and stays idle (is inactive) with probability 1 − p_k. Whenever active, each node performs an update by weight-averaging its solution estimate with the estimates of its active neighbors, taking a negative gradient step with respect to its local cost, and performing a projection onto the constraint set; inactive nodes perform no updates. Assuming that the nodes' local costs are strongly convex with Lipschitz continuous gradients, we show that, as long as the activation probability p_k grows to one asymptotically, our algorithm converges in the mean square sense (MSS) to the same solution as the standard distributed gradient method, i.e., as if all the nodes were active at all iterations. Moreover, when p_k grows to one linearly, with an appropriately set convergence factor, the algorithm has linear MSS convergence, with practically the same factor as the standard distributed gradient method. Simulations demonstrate that, compared with the standard distributed gradient method, the proposed algorithm significantly reduces the overall number of per-node communications and per-node gradient evaluations (computational cost) for the same required accuracy.
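The activation mechanism this abstract describes can be sketched directly: each node flips a p_k-biased coin, and active nodes average with their active neighbors, take a gradient step, and project. The quadratic costs, ring network, equal mixing weights, p_k schedule, and step-size rule below are illustrative assumptions, not the paper's exact parameters.

```python
# Randomly activated distributed projected gradient sketch: local costs
# f_i(x) = 0.5*(x - a_i)^2 on a 4-node ring, constraint set [0, 10].
# Active nodes (prob. p_k, growing to 1) average with active neighbors,
# step along their local negative gradient, and project; idle nodes do nothing.
import numpy as np

rng = np.random.default_rng(1)
a = np.array([1.0, 4.0, 7.0, 8.0])            # local cost minimizers (assumed)
nbrs = [[1, 3], [0, 2], [1, 3], [0, 2]]       # ring topology
x = np.zeros(4)
for k in range(2000):
    p_k = 1.0 - 0.5 ** (1 + k // 100)         # activation probability -> 1
    active = rng.random(4) < p_k
    step = 1.0 / (k + 10)                     # diminishing step size
    new_x = x.copy()
    for i in range(4):
        if not active[i]:
            continue                          # inactive: no update
        act = [j for j in nbrs[i] if active[j]]
        w = 1.0 / (len(act) + 1)              # equal weights (an assumption)
        avg = w * x[i] + sum(w * x[j] for j in act)
        new_x[i] = np.clip(avg - step * (avg - a[i]), 0.0, 10.0)
    x = new_x
print(x)   # all entries approach mean(a) = 5.0, the global minimizer
```

The savings claimed in the abstract come from the early iterations, where p_k is small and most nodes skip both the communication and the gradient evaluation.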