Results 1–4 of 4
Compact and Localized Distributed Data Structures
Journal of Distributed Computing, 2001
Cited by 76 (23 self)
Abstract:
This survey concerns the role of data structures for compactly storing and representing various types of information in a localized and distributed fashion. Traditional approaches to data representation are based on global data structures, which require access to the entire structure even if the sought information involves only a small and local set of entities. In contrast, localized data representation schemes are based on breaking the information into small local pieces, or labels, selected in a way that allows one to infer information regarding a small set of entities directly from their labels, without using any additional (global) information. The survey focuses on combinatorial and algorithmic techniques, and covers complexity results on various applications, including compact localized schemes for message routing in communication networks, and adjacency and distance labeling schemes.
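The locality idea behind such labeling schemes can be illustrated with a toy example (a sketch of the general concept, not a scheme taken from the survey): in a rooted tree, labeling each node with the pair (own id, parent id) lets adjacency between any two nodes be decided from their two labels alone, with no access to a global structure.

```python
# Toy adjacency labeling for a rooted tree (illustrative only, not a
# scheme quoted from the survey): each node's label is (id, parent_id),
# with parent_id = None for the root. Two nodes are adjacent exactly
# when one is the parent of the other, which is decidable from labels.

def make_labels(parent):
    """parent: dict mapping node id -> parent id (root maps to None)."""
    return {v: (v, p) for v, p in parent.items()}

def adjacent(label_u, label_v):
    """Decide adjacency from the two labels, using no global data."""
    (u, pu), (v, pv) = label_u, label_v
    return pu == v or pv == u

# Example tree: 1 is the root with children 2 and 3; 4 is a child of 2.
labels = make_labels({1: None, 2: 1, 3: 1, 4: 2})
print(adjacent(labels[2], labels[1]))  # True  (parent/child pair)
print(adjacent(labels[3], labels[4]))  # False (not adjacent)
```

Each label here takes O(log n) bits, which is the kind of compactness the survey's adjacency labeling results quantify.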
Parallel best-first search of state-space graphs: A summary of results
In Proceedings of the 1988 National Conference on Artificial Intelligence (AAAI-88), 1988
Cited by 54 (4 self)
Abstract:
This paper presents many different parallel formulations of the A*/Branch-and-Bound search algorithm. The parallel formulations primarily differ in the data structures used. Some formulations are suited only for shared-memory architectures, whereas others are suited for distributed-memory architectures as well. These parallel formulations have been implemented to solve the vertex cover problem and the TSP problem on the BBN Butterfly parallel processor. Using appropriate data structures, we are able to obtain fairly linear speedups for as many as 100 processors. We also discovered problem characteristics that make certain formulations more (or less) suitable for some search problems. Since the best-first search paradigm of A*/Branch-and-Bound is very commonly used, we expect these parallel formulations to be effective for a variety of problems. Concurrent and distributed priority queues used in these parallel formulations can be used in many parallel algorithms other than parallel A*/branch-and-bound.
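The shared-open-list formulation can be sketched in a few lines (a minimal sketch, not the paper's BBN Butterfly implementation): worker threads repeatedly pop the best node from a lock-protected global heap and push successors, pruning by the best goal cost found so far. The toy problem below is shortest path on a small invented graph; the paper's actual targets (vertex cover, TSP) would plug in via different successor and cost functions. Note that holding one lock around every heap operation serializes the queue, so this shows the structure of the formulation, not its speedup.

```python
import heapq
import threading

# Sketch of a shared open list for parallel best-first branch-and-bound:
# one global heap guarded by a lock, several worker threads popping the
# cheapest node and pushing successors. Graph and costs are invented.

graph = {  # node -> [(neighbor, edge_cost)]
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 1), ('D', 5)],
    'C': [('D', 1)],
    'D': [],
}

best = {'D': float('inf')}   # best cost found so far for the goal 'D'
open_list = [(0, 'A')]       # entries are (path_cost, node)
lock = threading.Lock()

def worker():
    while True:
        with lock:
            if not open_list:
                return
            cost, node = heapq.heappop(open_list)
            if cost >= best['D']:      # bound: cannot improve, prune
                continue
            if node == 'D':
                best['D'] = cost       # new incumbent solution
                continue
            for succ, w in graph[node]:
                heapq.heappush(open_list, (cost + w, succ))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(best['D'])  # 3  (A -> B -> C -> D)
```

The concurrent priority queues the paper studies exist precisely to replace this single coarse lock with finer-grained synchronization.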
Parallel and Distributed Branch-and-Bound/A* Algorithms
1994
Cited by 2 (0 self)
Abstract:
In this report, we propose new concurrent data structures and load balancing strategies for Branch-and-Bound (B&B)/A* algorithms in two models of parallel programming: shared and distributed memory. For the shared memory model (SMM), we present a general methodology which allows concurrent manipulation of most tree data structures, and show its usefulness for implementation on multiprocessors with global shared memory. Some priority queues suited to the basic operations performed by B&B algorithms are described: skew heaps, funnels, and splay trees. We also detail a specific data structure, called the treap, designed for the A* algorithm. These data structures are implemented on a parallel machine with shared memory, the KSR-1. For the distributed memory model (DMM), we show that the use of partial cost in B&B algorithms is not enough to balance nodes between the local queues. Thus, we introduce another notion of priority, called potentiality, between nodes that take...
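Of the priority queues named above, the skew heap is the simplest to sketch: insert and delete-min both reduce to a single meld operation that merges right spines and unconditionally swaps children. The following is a plain sequential sketch (the report's concurrent variants add synchronization on top of this core).

```python
# Sequential sketch of a skew heap, one of the priority queues the
# report considers for B&B. Every operation (insert, delete-min)
# reduces to meld, which merges two heaps along their right spines
# and swaps children at each step (the "skew").

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def meld(a, b):
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:          # keep the smaller root on top
        a, b = b, a
    a.right = meld(a.right, b) # merge into the right subtree...
    a.left, a.right = a.right, a.left  # ...then swap children
    return a

def insert(root, key):
    return meld(root, Node(key))

def delete_min(root):
    return root.key, meld(root.left, root.right)

root = None
for k in [5, 2, 8, 1]:
    root = insert(root, k)
out = []
while root is not None:
    k, root = delete_min(root)
    out.append(k)
print(out)  # [1, 2, 5, 8]
```

The unconditional swap keeps the amortized cost of meld logarithmic without storing any balance information, which is part of why skew heaps are attractive for concurrent use.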
On the Parallelization of Greedy Regression Tables
1999
Cited by 2 (2 self)
Abstract:
This paper presents PGRT, a parallel version of a best-first planner based on the Greedy Regression Tables approach. The parallelization method of PGRT distributes the task of extracting the actions applicable in a given state among the available processors. Although the number of operators limits the scalability of PGRT, it has proven to be quite efficient for low-scale parallelization. A modified Operator Reordering method has been used to further increase the efficiency of the parallel algorithm. We illustrate the speedup of PGRT on a variety of hard logistics problems, adapted from the AIPS-98 planning competition.
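The work-division step described in this abstract can be caricatured in a few lines (a hypothetical sketch, not PGRT's code): the operator set is partitioned among workers, each worker tests which of its operators' preconditions hold in the current state, and the applicable actions are merged. Operator names and preconditions below are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of PGRT-style parallelization: split the operator
# set among workers, have each worker check which of its operators are
# applicable in the current state (preconditions satisfied as a set
# inclusion test), then merge the results. Operators are invented.

operators = {  # operator name -> set of precondition atoms
    'load':   {'at_truck', 'at_pkg'},
    'drive':  {'at_truck'},
    'unload': {'in_truck'},
    'fly':    {'at_plane'},
}

state = {'at_truck', 'at_pkg'}

def applicable(op_names):
    """Operators in op_names whose preconditions hold in `state`."""
    return [op for op in op_names if operators[op] <= state]

def parallel_applicable(n_workers=2):
    names = sorted(operators)
    # Round-robin partition of operators across the workers.
    chunks = [names[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(applicable, chunks)
    return sorted(op for part in parts for op in part)

print(parallel_applicable())  # ['drive', 'load']
```

Because each worker only ever reads the shared state, no locking is needed in this step; the scalability limit the abstract mentions shows up here as the fixed size of the operator set being partitioned.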