Distributed Kalman filtering based on consensus strategies
, 2007
Cited by 60 (1 self)
In this paper, we consider the problem of estimating the state of a dynamical system from distributed noisy measurements. Each agent constructs a local estimate based on its own measurements and estimates from its neighbors. Estimation is performed via a two-stage strategy, the first being a Kalman-like measurement update which does not require communication, and the second being an estimate fusion using a consensus matrix. In particular, we study the interaction between the consensus matrix, the number of messages exchanged per sampling time, and the Kalman gain. We prove that optimizing the consensus matrix for fastest convergence and using the centralized optimal gain is not necessarily the optimal strategy if the number of exchanged messages per sampling time is small. Moreover, we show that although the joint optimization of the consensus matrix and the Kalman gain is in general a nonconvex problem, it is possible to compute them in some important scenarios. We also provide numerical examples to clarify the analytical results and compare them with alternative estimation strategies.
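The two-stage recursion described in this abstract (a local Kalman-like update that needs no communication, followed by consensus fusion) can be sketched on a scalar toy system. The gain `k`, the consensus weights, the noise levels, and the number of fusion rounds `m` below are arbitrary illustrative choices, not the paper's jointly optimized quantities:

```python
import numpy as np

# Hedged sketch of a two-stage distributed estimator on the scalar system
# x_{t+1} = a x_t + w_t, where each agent measures y_i = x_t + v_i.
rng = np.random.default_rng(0)
n_agents, a, q, r = 4, 0.95, 0.01, 0.5
m = 2    # consensus (fusion) rounds per sampling time
k = 0.3  # fixed Kalman-like gain; the paper optimizes k jointly with W

# Doubly stochastic consensus matrix for a ring of 4 agents.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x, xhat = 1.0, np.zeros(n_agents)
for t in range(200):
    x = a * x + rng.normal(scale=np.sqrt(q))             # true state
    y = x + rng.normal(scale=np.sqrt(r), size=n_agents)  # local measurements
    xhat = a * xhat + k * (y - a * xhat)  # stage 1: local measurement update
    for _ in range(m):                    # stage 2: m consensus fusion rounds
        xhat = W @ xhat
err = np.abs(xhat - x).max()
```

Increasing `m` brings the local estimates closer to the centralized estimate at the cost of more messages per sampling time, which is exactly the trade-off the paper analyzes.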
Optimal Motion Strategies for Range-only Constrained Multisensor Target Tracking
, 2006
Cited by 26 (8 self)
Abstract—In this paper, we study the problem of optimal trajectory generation for a team of mobile sensors tracking a moving target using distance-only measurements. This problem is shown to be NP-hard, in general, when constraints are imposed on the speed of the sensors. We propose two algorithms, modified Gauss-Seidel relaxation and LP relaxation, for determining the set of feasible locations that each sensor should move to in order to collect the most informative measurements; i.e., distance measurements that minimize the uncertainty about the position of the target. Furthermore, we prove that the motion strategy that minimizes the trace of the position error covariance matrix is equivalent to the one that maximizes the minimum eigenvalue of its inverse. The two proposed algorithms are applicable regardless of the process model that is employed for describing the motion of the target, and the computational complexity of both methods is linear in the number of sensors. Extensive simulation results are presented demonstrating that the performance attained with the proposed methods is comparable to that obtained with grid-based exhaustive search, whose computational cost is exponential in the number of sensors, and significantly better than that of a random motion strategy towards the target.
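The notion of "most informative" distance measurements can be illustrated with a toy Fisher-information computation (the setup and names below are hypothetical, not taken from the paper): a range measurement constrains the target only along the sensor-to-target bearing, so well-spread bearings maximize the minimum eigenvalue of the information matrix, the criterion the abstract relates to the error-covariance trace.

```python
import numpy as np

# Toy illustration: each range measurement from sensor position s to target p
# contributes information u u^T / sigma^2, where u is the unit bearing vector.
def range_info(sensors, target, sigma=0.1):
    info = np.zeros((2, 2))
    for s in sensors:
        u = (s - target) / np.linalg.norm(s - target)
        info += np.outer(u, u) / sigma**2
    return info

target = np.array([0.0, 0.0])
collinear = [np.array([2.0, 0.0]), np.array([4.0, 0.0])]   # identical bearings
orthogonal = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]  # bearings 90 deg apart

lam_col = np.linalg.eigvalsh(range_info(collinear, target)).min()
lam_orth = np.linalg.eigvalsh(range_info(orthogonal, target)).min()
# Collinear bearings leave one direction unobserved (minimum eigenvalue ~ 0),
# while orthogonal bearings give a well-conditioned information matrix.
```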
D-ADMM: A communication-efficient distributed algorithm for separable optimization
 IEEE Trans. Signal Process.
, 2013
Decentralized sparse signal recovery for compressive sleeping wireless sensor networks
 IEEE Transactions on Signal Processing
, 2010
Stability analysis of the consensus-based distributed LMS algorithm
 in Proc. 33rd Int. Conf. Acoust. Speech Signal Process., Las Vegas, NV
Cited by 11 (1 self)
We deal with consensus-based online estimation and tracking of (non)stationary signals using ad hoc wireless sensor networks (WSNs). A distributed (D) least-mean-square (LMS)-like algorithm is developed, which offers simplicity and flexibility while relying solely on single-hop communications among sensors. Starting from a pertinent squared-error cost, we apply the alternating-direction method of multipliers to minimize it in a distributed fashion, and utilize stochastic approximation tools to eliminate the need for a complete statistical characterization of the processes of interest. By resorting to stochastic averaging and perturbed Lyapunov techniques, we further establish that local estimates converge exponentially to the true parameter of interest when observations are noise-free and linearly related to it. This convergence result is necessary for bounding the estimation error in the presence of noise, and holds not only when regressors are white across time but even when they exhibit temporal correlations. Numerical tests confirm the merits of the novel D-LMS algorithm and its stability analysis. Index Terms—Distributed estimation, distributed algorithms, adaptive signal processing
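A much-simplified consensus-style LMS sketch (a combine-then-adapt diffusion form, not the paper's ADMM-derived D-LMS recursion) shows the single-hop communication pattern: each sensor averages its neighbors' estimates and then takes a local LMS step. The network, step size, and data below are arbitrary; observations are noise-free and linear, matching the setting of the convergence claim above.

```python
import numpy as np

# Simplified consensus-style LMS (combine-then-adapt), NOT the paper's exact
# ADMM-based recursion.  Noise-free linear observations, as in the analysis.
rng = np.random.default_rng(1)
n_agents, dim, mu = 5, 3, 0.05
w_true = np.array([1.0, -2.0, 0.5])

# Doubly stochastic ring: each sensor mixes with its two single-hop neighbors.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

est = np.zeros((n_agents, dim))
for t in range(2000):
    est = W @ est                         # combine: single-hop consensus step
    h = rng.normal(size=(n_agents, dim))  # local regressors
    y = h @ w_true                        # noise-free observations
    e = y - np.sum(h * est, axis=1)       # local prediction errors
    est += mu * e[:, None] * h            # adapt: local LMS step
err = np.abs(est - w_true).max()
```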
Self-Organized Collective Decision Making: The Weighted Voter Model
, 2014
Cited by 9 (9 self)
The information provided is the sole responsibility of the authors and does not necessarily reflect the opinion of the members of IRIDIA. The authors take full responsibility for any copyright breaches that may result from publication of this paper in the IRIDIA – Technical Report Series. IRIDIA is not responsible for any use that might be made of data appearing in this publication.
Convergence Analysis of Alternating Direction Method of Multipliers for a Family of Nonconvex Problems
A stochastic coordinate descent primal-dual algorithm and applications to large-scale composite optimization
, 2014
Cited by 5 (2 self)
Abstract—Based on the idea of randomized coordinate descent of α-averaged operators, a randomized primal-dual optimization algorithm is introduced, where a random subset of coordinates is updated at each iteration. The algorithm builds upon a variant of a recent (deterministic) algorithm proposed by Vũ and Condat that includes the well-known ADMM as a particular case. The obtained algorithm is used to solve a distributed optimization problem asynchronously. A network of agents, each having a separate cost function containing a differentiable term, seeks a consensus on the minimum of the aggregate objective. The method yields an algorithm where, at each iteration, a random subset of agents wake up, update their local estimates, exchange some data with their neighbors, and go idle. Numerical results demonstrate the attractive performance of the method. The general approach can be naturally adapted to other situations where coordinate descent convex optimization algorithms are used with a random choice of the coordinates.
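The random wake-up pattern can be illustrated with the simplest possible asynchronous update, pairwise gossip averaging, rather than the paper's randomized primal-dual iteration (the network and schedule below are arbitrary): at each tick one random agent wakes, averages with one random neighbor, and both return to idle.

```python
import numpy as np

# Illustration of asynchronous random wake-up using plain gossip averaging,
# NOT the paper's randomized primal-dual algorithm.  Pairwise averaging
# preserves the network sum, so all agents converge to the initial average.
rng = np.random.default_rng(3)
n = 10
x = rng.normal(size=n)
avg = x.mean()
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring network

for _ in range(5000):
    i = int(rng.integers(n))           # a single random agent wakes up
    j = int(rng.choice(neighbors[i]))  # it contacts one random neighbor
    x[i] = x[j] = 0.5 * (x[i] + x[j])  # both update locally, then go idle

spread = x.max() - x.min()  # disagreement across agents after 5000 ticks
```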
Consensus with Robustness to Outliers via Distributed Optimization
Cited by 4 (0 self)
Abstract—Over the past few years, a number of distributed algorithms have been developed for integrating the measurements acquired by a wireless sensor network. Among them, average consensus algorithms have drawn significant attention due to a number of practical advantages, such as robustness to noise in the measurements, robustness to changes in the network topology, and guaranteed convergence to the centralized solution. However, one of the main drawbacks of existing consensus algorithms is their inability to handle outliers in the measurements. This is because they are based on minimizing a Euclidean (L2) loss function, which is known to be sensitive to outliers. In this paper, we propose a distributed optimization framework that can handle outliers in the measurements. The proposed framework generalizes consensus algorithms to robust loss functions that are strictly convex or convex, such as the Huber loss or the L1 loss. This generalization is achieved by posing the robust consensus problem as a constrained optimization problem, which is solved using distributed versions of classical primal-dual and augmented Lagrangian optimization methods. The resulting algorithms include the classical average consensus as a particular case. Synthetic experiments evaluate our robust consensus framework for several robust cost functions and show their advantages over the classical average consensus algorithm.
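The effect of a robust loss can be sketched with a simpler scheme than the paper's primal-dual and augmented Lagrangian algorithms: a distributed subgradient iteration on the Huber loss (step size, network, and data below are illustrative). Because the Huber gradient saturates, a single wild measurement cannot drag the consensus value the way it drags the plain average.

```python
import numpy as np

# Distributed subgradient sketch of robust consensus under the Huber loss.
# Simpler than the paper's primal-dual/augmented-Lagrangian algorithms;
# all constants are illustrative.
def huber_grad(u, delta=1.0):
    return np.clip(u, -delta, delta)  # derivative of the Huber loss

y = np.array([1.0, 1.1, 0.9, 1.05, 50.0])  # last measurement is an outlier
n = len(y)

# Doubly stochastic ring consensus matrix.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = y.copy()  # each agent starts from its own measurement
step = 0.1
for t in range(2000):
    x = W @ x - step * huber_grad(x - y)  # mix with neighbors, then descend

robust = x.mean()  # settles near the centralized Huber M-estimate
naive = y.mean()   # L2 average consensus limit: dragged up by the outlier
```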
Faster linear iterations for distributed averaging
 in Proc. IFAC World Congr., Seoul, South Korea
, 2008
Cited by 4 (0 self)
Abstract: Distributed averaging problems are a subclass of distributed consensus problems, which have received substantial attention from several research communities. Although many of the proposed algorithms are linear iterations, they vary both in structure and state dimension. In this paper, we investigate the performance benefits of adding extra states to distributed averaging iterations. We establish conditions for convergence and discuss possible ways of optimizing the convergence rates. Numerical examples show that the performance can be significantly increased by adding extra states. Finally, we provide necessary and sufficient conditions for convergence of a more general version of distributed averaging iterations.
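One concrete way to add an extra state per node, shown here as an illustrative second-order "shift-register" scheme (not necessarily the paper's specific construction), keeps the previous iterate and mixes it into the update. This preserves the network average while sharply improving the convergence rate over the plain iteration x ← Wx.

```python
import numpy as np

# Second-order distributed averaging: each node keeps one extra state, its
# previous iterate.  The weight gamma is the classical acceleration choice
# gamma = 2 / (1 + sqrt(1 - lambda_2^2)); all constants are illustrative.
n = 20
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

rng = np.random.default_rng(2)
x0 = rng.normal(size=n)
avg = x0.mean()

lam2 = np.sort(np.linalg.eigvalsh(W))[-2]    # second-largest eigenvalue of W
gamma = 2.0 / (1.0 + np.sqrt(1.0 - lam2**2))

T = 100
x = x0.copy()                  # plain iteration: x <- W x
for _ in range(T):
    x = W @ x
err_plain = np.abs(x - avg).max()

x_prev, x = x0.copy(), W @ x0  # two-state iteration with one extra state
for _ in range(T - 1):
    x, x_prev = gamma * (W @ x) + (1.0 - gamma) * x_prev, x
err_fast = np.abs(x - avg).max()
# The affine combination of two average-preserving iterates still preserves
# the average, yet converges far faster than the single-state iteration.
```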