Results 1–10 of 32
Global clock synchronization in sensor networks
 IEEE Transactions on Computers
Abstract

Cited by 76 (1 self)
Abstract—Global synchronization is important for many sensor network applications that require precise mapping of collected sensor data with the time of the events, for example, in tracking and surveillance. It also plays an important role in energy conservation in MAC layer protocols. This paper describes four methods to achieve global synchronization in a sensor network: a node-based approach, a hierarchical cluster-based method, a diffusion-based method, and a fault-tolerant diffusion-based method. The diffusion-based protocol is fully localized. We present two implementations of the diffusion-based protocol for synchronous and asynchronous systems and prove its convergence. Finally, we show that, by imposing some constraints on the sensor network, global clock synchronization can be achieved in the presence of malicious nodes that exhibit Byzantine failures. Index Terms—Sensor networks, fault tolerance.
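The diffusion-based idea reduces to repeated neighborhood averaging of clock values. A minimal sketch of that averaging step, assuming a simple pairwise asynchronous model (the graph, node IDs, and update rule here are illustrative, not the paper's exact protocol):

```python
import random

def diffusion_sync_round(clocks, neighbors):
    """One asynchronous round of the averaging idea behind
    diffusion-based synchronization: a randomly chosen node
    averages its clock value with one random neighbor."""
    i = random.choice(list(clocks))
    j = random.choice(neighbors[i])
    avg = (clocks[i] + clocks[j]) / 2.0
    clocks[i] = avg
    clocks[j] = avg
```

Repeated rounds drive all clock values toward the global average while preserving their sum, which is the kind of convergence property the paper establishes for its synchronous and asynchronous variants.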
The Generalized Dimension Exchange Method for Load Balancing in k-ary n-cubes and Variants
, 1995
Abstract

Cited by 44 (9 self)
The Generalized Dimension Exchange (GDE) method is a fully distributed load balancing method that operates in a relaxation fashion for multicomputers with a direct communication network. It is parameterized by an exchange parameter that governs the splitting of load between a pair of directly connected processors during load balancing. An optimal exchange parameter would lead to the fastest convergence of the balancing process. Previous work has resulted in the optimal parameter for binary n-cubes. In this paper, we derive the optimal parameters for the k-ary n-cube network and its variants: the ring, the torus, the chain, and the mesh. We establish the relationships between the optimal convergence rates of the method when applied to these structures, and conclude that the GDE method favors high-dimensional k-ary n-cubes. We also reveal the superiority of the GDE method over another relaxation-based method, the diffusion method. We further show through statistical simulations that the optimal parameters do speed up the GDE...
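The exchange step itself is simple to state. A minimal sketch of one GDE sweep on a binary 3-cube, assuming an exchange parameter of 1/2 (the known optimum for binary n-cubes mentioned in the abstract); function and variable names are illustrative:

```python
import numpy as np

def gde_sweep(load, lam, n_dims):
    """One GDE sweep on a binary n-cube with 2**n_dims processors.
    Along each dimension d, every processor pairs with its neighbor
    across d and keeps (1 - lam) of its own load plus lam of the
    neighbor's load."""
    load = np.asarray(load, dtype=float).copy()
    for d in range(n_dims):
        partner = np.arange(len(load)) ^ (1 << d)  # neighbor across dimension d
        load = (1 - lam) * load + lam * load[partner]
    return load
```

With lam = 1/2 a single sweep of a binary n-cube balances any initial distribution exactly; for k-ary n-cubes with k > 2 the best parameter differs, which is what the paper derives.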
An Optimal Dynamic Load Balancing Algorithm
 Daresbury Laboratory
, 1995
Abstract

Cited by 40 (0 self)
The problem of redistributing work load on parallel computers is considered. An optimal redistribution algorithm, which minimises the Euclidean norm of the migrating load, is derived. The problem is further studied by modelling with the unsteady heat conduction equation. The relationship between this algorithm and other dynamic load balancing algorithms is discussed. Convergence of the algorithm for special graphs is studied. Finally, numerical results on randomly generated graphs are given to demonstrate the effectiveness of the algorithm. 1. Introduction To achieve good performance on a parallel computer, it is essential to maintain a balanced work load among all the processors of the computer. Sometimes the load can be balanced statically. However, in many cases the load on each processor cannot be predicted a priori. One example that demonstrates the need for both static and dynamic load balancing strategies, which is also the main motivation for this paper, is in the parallel finite e...
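A minimal-Euclidean-norm balancing flow can be computed from the graph Laplacian: solve L y = b, where b is each processor's excess over the average load, and read the migrating flow off the edges. A small sketch on a 4-processor chain (the graph and load values are illustrative, not taken from the paper):

```python
import numpy as np

# Node-edge incidence matrix of a 4-processor chain with
# edges (0,1), (1,2), (2,3); L = B @ B.T is the graph Laplacian.
B = np.array([[ 1,  0,  0],
              [-1,  1,  0],
              [ 0, -1,  1],
              [ 0,  0, -1]], dtype=float)
L = B @ B.T

load = np.array([10.0, 2.0, 2.0, 2.0])
b = load - load.mean()                    # excess load per processor
y = np.linalg.lstsq(L, b, rcond=None)[0]  # L is singular; take least-squares solution
flow = B.T @ y                            # units to migrate along each edge
balanced = load - B @ flow                # every entry equals the mean
```

Among all flows that balance the load, this one minimizes the Euclidean norm of the migrated load, which is the optimality property the abstract refers to; the heat-conduction analogy arises because L is a discrete Laplacian.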
Automated Parallelization of Discrete State-space Generation
 Journal of Parallel and Distributed Computing
, 1997
Abstract

Cited by 21 (2 self)
We consider the problem of generating a large state-space in a distributed fashion. Unlike previously proposed solutions that partition the set of reachable states according to a hashing function provided by the user, we explore heuristic methods that completely automate the process. The first step is an initial random walk through the state space to initialize a search tree, duplicated in each processor. Then, the reachability graph is built in a distributed way, using the search tree to assign each newly found state to classes assigned to the available processors. Furthermore, we explore two remapping criteria that attempt to balance memory usage or future workload, respectively. We show how the cost of computing the global snapshot required for remapping will scale up for system sizes in the foreseeable future. An extensive set of results is presented to support our conclusion that remapping is extremely beneficial. 1 Introduction Discrete systems are frequently analyzed by genera...
Iterative Dynamic Load Balancing in Multicomputers
 Journal of Operational Research Society
, 1994
Abstract

Cited by 21 (3 self)
Dynamic load balancing in multicomputers can improve the utilization of processors and the efficiency of parallel computations through migrating workload across processors at runtime. We present a survey and critique of dynamic load balancing strategies that are iterative: workload migration is carried out through transferring processes across nearest-neighbor processors. Iterative strategies have become prominent in recent years because of the increasing popularity of point-to-point interconnection networks for multicomputers. Key words: dynamic load balancing, multicomputers, optimization, queueing theory, scheduling. INTRODUCTION Multicomputers are highly concurrent systems that are composed of many autonomous processors connected by a communication network [1, 2]. To improve the utilization of the processors, parallel computations in multicomputers require that processes be distributed to processors in such a way that the computational load is evenly spread among the processors...
Nearest Neighbor Algorithms for Load Balancing in Parallel Computers
, 1995
Abstract

Cited by 19 (2 self)
With nearest neighbor load balancing algorithms, a processor makes balancing decisions based on localized workload information and manages workload migrations within its neighborhood. This paper compares a couple of fairly well-known nearest neighbor algorithms, the dimension-exchange (DE, for short) and the diffusion (DF, for short) methods, and several of their variants: the average dimension-exchange (ADE), the optimally-tuned dimension-exchange (ODE), the local average diffusion (ADF), and the optimally-tuned diffusion (ODF). The measures of interest are their efficiency in driving any initial workload distribution to a uniform distribution and their ability in controlling the growth of the variance among the processors' workloads. The comparison is made with respect to both one-port and all-port communication architectures and in consideration of various implementation strategies including synchronous/asynchronous invocation policies and static/dynamic random workload behaviors. It t...
Task Assignment and Transaction Clustering Heuristics for Distributed Systems
 Information Sciences
, 1997
Abstract

Cited by 19 (7 self)
In this paper we discuss the task assignment problem for distributed systems. We also show how this problem is very similar to that of clustering transactions for load balancing purposes and for their efficient execution in a distributed environment. The formalization of these problems in terms of a graph-theoretic representation of a distributed program, or of a set of related transactions, is given. The cost function which needs to be minimized by an assignment of tasks to processors, or of transactions to clusters, is detailed, and we survey related work, as well as work on the dynamic load balancing problem. Since the task assignment problem is NP-hard, we present three novel heuristic algorithms that we have tested for solving it and compare them to the well-known greedy heuristic. These novel heuristics use neural networks, genetic algorithms, and simulated annealing. Both the resulting performance and the computational cost of these algorithms are evaluated on a large number o...
Data-Parallel Load Balancing Strategies
 Parallel Computing
, 1996
Abstract

Cited by 19 (0 self)
Programming irregular and dynamic data-parallel algorithms requires taking data distribution into account. The implementation of a load balancing algorithm is a quite difficult task for the programmer. However, a load balancing strategy may be developed independently of the application. The integration of such a strategy in the data-parallel algorithm may be relevant to a library or a data-parallel compiler runtime. We propose load distribution data-parallel algorithms for a class of irregular data-parallel algorithms called stack algorithms. Our algorithms allow the use of regular and/or irregular communication patterns to exchange work between processors. The results of a theoretical analysis of these algorithms are presented. They allow a comparison of the different load balancing algorithms and the identification of criteria for the choice of a load balancing algorithm.
An Analytical Comparison of Nearest Neighbor Algorithms for Load Balancing in Parallel Computers
, 1995
Abstract

Cited by 13 (2 self)
With nearest neighbor load balancing algorithms, a processor makes balancing decisions based on its local information and manages workload migrations within its neighborhood. This paper compares a couple of fairly well-known nearest neighbor algorithms, the dimension exchange and the diffusion methods and their variants, in terms of their performance in both one-port and all-port communication architectures. It turns out that the dimension exchange method outperforms the diffusion method in the one-port communication model, and that the strength of the diffusion method is in asynchronous implementations in the all-port communication model. The underlying communication networks considered assume the most popular topologies, the mesh and the torus, and their special cases: the hypercube and the k-ary n-cube. 1 Introduction Massively parallel computers have been shown to be very efficient at solving problems that can be partitioned into tasks with static computation and communication patt...
Optimal Parameters For Load Balancing With The Diffusion Method In Mesh Networks
, 1994
Abstract

Cited by 11 (2 self)
The diffusion method is a simple distributed load balancing method for distributed memory multiprocessors. It operates in a relaxation fashion for point-to-point networks. Its convergence to the balanced state relies on the value of a parameter: the diffusion parameter. An optimal diffusion parameter would lead to the fastest convergence of the method. Previous results on optimal parameters have existed for the k-ary n-cube and the torus. In this paper, we derive optimal diffusion parameters for mesh networks.
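For reference, one relaxation step of the diffusion method looks like the following sketch, on a 4-processor ring with an illustrative diffusion parameter of 0.25 (the paper's contribution is deriving the optimal parameter values for meshes, which this sketch does not reproduce):

```python
import numpy as np

def diffusion_step(load, neighbors, alpha):
    """One step of the diffusion method: each processor moves
    alpha * (load difference) across every incident edge,
    i.e. relaxes toward its neighborhood average."""
    new = np.asarray(load, dtype=float).copy()
    for i, nbrs in neighbors.items():
        for j in nbrs:
            new[i] += alpha * (load[j] - load[i])
    return new

# 4-processor ring
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
load = np.array([12.0, 0.0, 0.0, 0.0])
for _ in range(60):
    load = diffusion_step(load, ring, alpha=0.25)
```

Each step conserves the total load, and for a suitable parameter the iteration converges geometrically to the uniform distribution; the convergence rate, and hence the best parameter, depends on the network topology.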