Results 1 - 5 of 5
The Generalized Dimension Exchange Method for Load Balancing in k-ary n-cubes and Variants
, 1995
"... The Generalized Dimension Exchange (GDE) method is a fully distributed load balancing method that operates in a relaxation fashion for multicomputers with a direct communication network. It is parameterized by an exchange parameter that governs the splitting of load between a pair of directly conne ..."
Abstract

Cited by 44 (9 self)
The Generalized Dimension Exchange (GDE) method is a fully distributed load balancing method that operates in a relaxation fashion for multicomputers with a direct communication network. It is parameterized by an exchange parameter λ that governs the splitting of load between a pair of directly connected processors during load balancing. An optimal λ would lead to the fastest convergence of the balancing process. Previous work has resulted in the optimal λ for the binary n-cubes. In this paper, we derive the optimal λ's for the k-ary n-cube network and its variants: the ring, the torus, the chain, and the mesh. We establish the relationships between the optimal convergence rates of the method when applied to these structures, and conclude that the GDE method favors high-dimensional k-ary n-cubes. We also reveal the superiority of the GDE method to another relaxation-based method, the diffusion method. We further show through statistical simulations that the optimal λ's do speed up the GDE...
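The pairwise exchange rule this abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's implementation; the node labelling, sweep order, and the name `gde_sweep` are assumptions, and λ = 0.5 corresponds to plain pairwise averaging:

```python
def gde_sweep(load, n, lam=0.5):
    """One GDE sweep over an n-dimensional binary hypercube.

    load: dict mapping node id (0 .. 2^n - 1) -> workload (float).
    lam:  exchange parameter; each node keeps (1 - lam) of its own
          load and takes lam of its neighbor's, dimension by dimension.
    """
    for dim in range(n):
        bit = 1 << dim
        for u in range(1 << n):
            v = u ^ bit          # neighbor differing in this dimension
            if u < v:            # visit each pair once per dimension
                wu, wv = load[u], load[v]
                load[u] = (1 - lam) * wu + lam * wv
                load[v] = (1 - lam) * wv + lam * wu
    return load
```

With lam = 0.5 on a binary hypercube, a single sweep over all n dimensions already equalizes any initial load exactly, which is consistent with the special role the binary n-cube plays in the earlier work cited above.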
Iterative Dynamic Load Balancing in Multicomputers
 Journal of the Operational Research Society
, 1994
"... Dynamic load balancing in multicomputers can improve the utilization of processors and the efficiency of parallel computations through migrating workload across processors at runtime. We present a survey and critique of dynamic load balancing strategies that are iterative: workload migration is car ..."
Abstract

Cited by 21 (3 self)
Dynamic load balancing in multicomputers can improve the utilization of processors and the efficiency of parallel computations by migrating workload across processors at runtime. We present a survey and critique of dynamic load balancing strategies that are iterative: workload migration is carried out by transferring processes across nearest-neighbor processors. Iterative strategies have become prominent in recent years because of the increasing popularity of point-to-point interconnection networks for multicomputers. Key words: dynamic load balancing, multicomputers, optimization, queueing theory, scheduling.

INTRODUCTION. Multicomputers are highly concurrent systems that are composed of many autonomous processors connected by a communication network [1, 2]. To improve the utilization of the processors, parallel computations in multicomputers require that processes be distributed to processors in such a way that the computational load is evenly spread among the processors...
Nearest Neighbor Algorithms for Load Balancing in Parallel Computers
, 1995
"... With nearest neighbor load balancing algorithms, a processor makes balancing decisions based on localized workload information and manages workload migrations within its neighborhood. This paper compares a couple of fairly wellknown nearest neighbor algorithms, the dimensionexchange (DE, for shor ..."
Abstract

Cited by 19 (2 self)
With nearest-neighbor load balancing algorithms, a processor makes balancing decisions based on localized workload information and manages workload migrations within its neighborhood. This paper compares two fairly well-known nearest-neighbor algorithms, the dimension-exchange (DE, for short) and the diffusion (DF, for short) methods, and several of their variants: the average dimension-exchange (ADE), the optimally-tuned dimension-exchange (ODE), the local average diffusion (ADF), and the optimally-tuned diffusion (ODF). The measures of interest are their efficiency in driving any initial workload distribution to a uniform distribution and their ability to control the growth of the variance among the processors' workloads. The comparison is made with respect to both one-port and all-port communication architectures, and in consideration of various implementation strategies including synchronous/asynchronous invocation policies and static/dynamic random workload behaviors. It t...
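The diffusion (DF) method contrasted with dimension exchange above balances with all neighbors simultaneously rather than one dimension at a time. A minimal synchronous sketch on a ring follows; the ring topology, the diffusion parameter name `alpha`, and the function name are illustrative choices, not taken from the paper:

```python
def diffusion_step(load, alpha):
    """One synchronous diffusion (DF) step on a ring of len(load) processors.

    Each processor simultaneously moves a fraction alpha of the workload
    difference along each of its two ring links.
    """
    n = len(load)
    new = list(load)
    for u in range(n):
        for v in ((u - 1) % n, (u + 1) % n):
            new[u] += alpha * (load[v] - load[u])
    return new
```

Total load is conserved, since every pairwise transfer appears once with each sign; repeated steps drive the distribution toward uniform for suitably small alpha.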
Short Dominating Paths and Cycles in the Binary Hypercube
, 2001
"... A sequence of binary words of length n is called a cube dominating path, if the Hamming distance between two consecutive words is always one, and every binary word of length n is within Hamming distance one from at least one these words. If also the first and last words are Hamming distance one apar ..."
Abstract

Cited by 1 (1 self)
A sequence of binary words of length n is called a cube dominating path if the Hamming distance between two consecutive words is always one, and every binary word of length n is within Hamming distance one from at least one of these words. If the first and last words are also Hamming distance one apart, the sequence is called a cube dominating cycle. Bounds on the cardinality of such sequences are given, and it is shown that asymptotically the shortest cube dominating path and cycle consist of 2^n(1 + o(1))/n words.
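The two defining conditions of a cube dominating path (consecutive words at Hamming distance one, and every n-bit word covered within distance one) can be checked directly. This small verifier is an illustration only, with words encoded as integers; the function name is an assumption:

```python
def is_dominating_path(words, n):
    """Check that a sequence of n-bit words (ints) is a cube dominating path."""
    # Condition 1: consecutive words at Hamming distance exactly one.
    if any(bin(a ^ b).count("1") != 1 for a, b in zip(words, words[1:])):
        return False
    # Condition 2: every n-bit word within Hamming distance one of the path.
    covered = set()
    for w in words:
        covered.add(w)
        for i in range(n):
            covered.add(w ^ (1 << i))   # all words one bit-flip away
    return len(covered) == 1 << n
```

For example, for n = 2 the two-word sequence 00, 01 already dominates all four words of length 2.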
The Utility Problem in Case Based Reasoning
, 1993
"... ed in CaseBased Reasoning: Papers from the 1993 Workshop, July 1112, Washington, D.C., Technical Report WS9301, AAAI Press THE UTILITY PROBLEM IN CASE BASED REASONING ANTHONY G. FRANCIS ASHWIN RAM College of Computing College of Computing Georgia Institute of Technology Georgia Institute of Te ..."
Abstract

Cited by 1 (0 self)
Appeared in Case-Based Reasoning: Papers from the 1993 Workshop, July 11-12, Washington, D.C., Technical Report WS-93-01, AAAI Press. THE UTILITY PROBLEM IN CASE BASED REASONING. Anthony G. Francis and Ashwin Ram, College of Computing, Georgia Institute of Technology, Atlanta, Georgia 30332-0280; (404) 351-4574, (404) 853-9372; centaur@cc.gatech.edu, ashwin@cc.gatech.edu. ABSTRACT: Case-based reasoning systems may suffer from the utility problem, which occurs when knowledge learned in an attempt to improve a system's performance degrades performance instead. One of the primary causes of the utility problem is the slowdown of conventional memories as the number of stored items increases. Unrestricted learning algorithms can swamp their memory system, causing the system to slow down more than the average speedup provided by individual learned rules. Massive parallelism is often offered as a solution to this problem. However,...