Results 1 - 10 of 48
Programming Parallel Algorithms
, 1996
Abstract
Cited by 192 (9 self)
In the past 20 years there has been tremendous progress in developing and analyzing parallel algorithms. Researchers have developed efficient parallel algorithms to solve most problems for which efficient sequential solutions are known. Although some of these algorithms are efficient only in a theoretical framework, many are quite efficient in practice or have key ideas that have been used in efficient implementations. This research on parallel algorithms has not only improved our general understanding of parallelism but in several cases has led to improvements in sequential algorithms. Unfortunately, there has been less success in developing good languages for programming parallel algorithms, particularly languages that are well suited for teaching and prototyping algorithms. There has been a large gap between languages
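The survey above concerns languages for expressing parallel algorithms. As a minimal illustration, and not code from the paper, here is a Python sketch of the work-efficient prefix-sum (scan) primitive that data-parallel languages of this kind typically expose; the parallel hardware is simulated sequentially:

```python
def prescan(xs):
    """Exclusive prefix sum via the classic up-sweep/down-sweep scheme.
    The inner loop at each level touches disjoint elements, so on a
    parallel machine each level is one step, giving O(log n) depth."""
    n = len(xs)
    assert n and (n & (n - 1)) == 0, "power-of-two length, for simplicity"
    t = list(xs)
    # up-sweep: build a tree of partial sums in place
    d = 1
    while d < n:
        for i in range(2 * d - 1, n, 2 * d):  # independent updates
            t[i] += t[i - d]
        d *= 2
    # down-sweep: push prefixes back down the tree
    t[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(2 * d - 1, n, 2 * d):
            t[i - d], t[i] = t[i], t[i] + t[i - d]
        d //= 2
    return t
```

For example, `prescan([3, 1, 7, 0, 4, 1, 6, 3])` yields `[0, 3, 4, 11, 11, 15, 16, 22]`.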
Honeycomb Networks: Topological Properties and Communication Algorithms
 IEEE Trans. Parallel and Distributed Systems
, 1997
Abstract
Cited by 46 (2 self)
Abstract—The honeycomb mesh, based on hexagonal plane tessellation, is considered as a multiprocessor interconnection network. A honeycomb mesh network with n nodes has degree 3 and diameter approximately 1.63√n − 1, which is a 25 percent smaller degree and an 18.5 percent smaller diameter than those of the mesh-connected computer with approximately the same number of nodes. The vertex- and edge-symmetric honeycomb torus network is obtained by adding wraparound edges to the honeycomb mesh. The network cost, defined as the product of degree and diameter, is better for honeycomb networks than for the two other families based on square (mesh-connected computers and tori) and triangular (hexagonal meshes and tori) tessellations. A convenient addressing scheme for nodes is introduced which provides simple computation of shortest paths and of the diameter. Simple and optimal (in the number of required communication steps) routing, broadcasting, and semigroup computation algorithms are developed. The average distance in the honeycomb torus with n nodes is proved to be approximately 0.54√n. In addition to honeycomb meshes bounded by a regular hexagon, we also consider honeycomb networks with rhombus and rectangle as the bounding polygons.
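The network-cost claim can be sanity-checked numerically. The sketch below uses the honeycomb diameter figure quoted in the abstract (about 1.63√n) and assumes, as an illustration, the standard degree 4 and diameter 2(√n − 1) for a √n × √n mesh-connected computer:

```python
import math

def network_cost(degree, diameter):
    # network cost = degree * diameter, the metric defined in the abstract
    return degree * diameter

def compare_costs(n):
    """Compare honeycomb mesh vs. square mesh on the degree*diameter
    metric for n nodes. The honeycomb diameter is the abstract's
    1.63*sqrt(n); the square-mesh figures are standard assumptions."""
    honeycomb = network_cost(3, 1.63 * math.sqrt(n))
    mesh = network_cost(4, 2 * (math.sqrt(n) - 1))
    return honeycomb, mesh
```

For n = 10,000 this gives roughly 489 versus 792, consistent with the claimed advantage of the honeycomb family.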
Capturing the Connectivity of High-Dimensional Geometric Spaces by Parallelizable Random Sampling Techniques
 IN ADVANCES IN RANDOMIZED PARALLEL COMPUTING, P.M. PARDALOS AND S. RAJASEKARAN (EDS.), COMBINATORIAL OPTIMIZATION SERIES
, 1999
Abstract
Cited by 20 (6 self)
Applications such as robot programming, design for manufacturing, animation of digital actors, rational drug design, and surgical planning require computing paths in high-dimensional geometric spaces, a provably hard problem. Recently, a general path-planning approach based on a parallelizable random sampling scheme has emerged as an effective way to solve this problem. In this approach, the path planner captures the connectivity of a space F by building a probabilistic roadmap, a network of simple paths connecting points picked at random in F. This paper combines results previously presented in separate papers. It describes a basic probabilistic roadmap planner that is easily parallelizable, and it analyzes the performance of this planner as a function of how well F satisfies geometric properties called ε-goodness, expansiveness, and path clearance. While ε-goodness allows us to study how well a probabilistic roadmap covers F, expansiveness and path clearance allow us to compare the connectedness of the roadmap to that of F.
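The roadmap construction described above can be sketched in a few lines. The following is a toy illustration, not the authors' planner: the free-space predicate `free`, the sample count, and the connection radius are all hypothetical parameters, and the space is taken to be the unit square:

```python
import random, math

def build_prm(free, n_samples=200, radius=0.3, seed=1):
    """Minimal probabilistic-roadmap sketch: sample points in [0,1]^2
    satisfying the free-space predicate `free`, then link pairs within
    `radius` whose connecting segment stays free (checked by dense
    interpolation). Sampling and the local connection tests are
    independent of one another, which is what makes the scheme easy
    to parallelize."""
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < n_samples:
        p = (rng.random(), rng.random())
        if free(p):
            nodes.append(p)
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            a, b = nodes[i], nodes[j]
            if math.dist(a, b) <= radius and all(
                free(((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1]))
                for t in (k / 10 for k in range(11))
            ):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges
```

A query phase would then connect the start and goal configurations to the roadmap and search the resulting graph.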
The Characterization of Data-Accumulating Algorithms
 Proceedings of the International Parallel Processing Symposium
, 1998
Abstract
Cited by 20 (18 self)
A data-accumulating algorithm (d-algorithm for short) works on an input considered as a virtually endless stream. The computation terminates when all the currently arrived data have been processed before another datum arrives. In this paper, the class of d-algorithms is characterized. It is shown that this class is identical to the class of online algorithms. The parallel implementation of d-algorithms is then investigated. It is found that, in general, the speedup achieved through parallelism can be made arbitrarily large for almost any such algorithm. On the other hand, we prove that for d-algorithms whose static counterparts manifest only unitary speedup, no improvement is possible through parallel implementation.
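The termination condition above can be made concrete with a small worked example. Under an assumed model (not from the paper) of a linear arrival law and constant work per datum, the finishing time t solves t = c·(n0 + r·t):

```python
def d_algorithm_finish_time(n0, arrival_rate, work_per_datum):
    """Sketch of the d-algorithm termination condition under assumed
    parameters: n0 initial data, `arrival_rate` (r) new data per time
    unit, and `work_per_datum` (c) time to process one datum. The
    computation ends at the first t where the elapsed time covers all
    data arrived so far, t = c*(n0 + r*t), giving t = c*n0 / (1 - c*r).
    This is finite only when c*r < 1, i.e. processing outpaces arrival."""
    c, r = work_per_datum, arrival_rate
    if c * r >= 1:
        return None  # the stream grows faster than it is consumed
    return c * n0 / (1 - c * r)
```

For instance, with 100 initial data, 50 arrivals per time unit, and 0.01 time units of work per datum, the computation finishes at t = 2.0; doubling the per-datum work would make termination impossible, which is the regime where the abstract's arbitrarily large parallel speedups matter.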
Timings for Associative Operations on the MASC Model
, 2001
Abstract
Cited by 17 (8 self)
The MASC (Multiple Associative Computing) model is a generalized associative-style computational model that naturally supports massive data-parallelism and also control-parallelism. A wide range of applications has been developed on this model. Recent research has compared its power to that of other popular parallel models such as the PRAM and MMB models using simulations. However, the simulation of the MMB has identified some important issues regarding the cost of certain basic MASC operations required for associative computing, such as broadcasts, reductions, and associative searches. This paper investigates these issues and gives background information and an analysis of timings for these operations, based on implementation techniques and on fairness of comparison with respect to other models. It aims to provide justification and to clarify arguments on the timings for these constant-time or nearly constant-time basic MASC operations.
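For illustration only (the paper analyzes timings on the MASC hardware model itself), here is a sequential Python simulation of two of the basic operations mentioned: an associative search and a bit-serial maximum reduction. The 16-bit word length is an assumption:

```python
def associative_search(memory, field, value):
    """Sketch of an associative (content-addressable) search: every
    cell compares its record against a broadcast value in parallel and
    raises a responder flag; the parallel compare is simulated here
    with a comprehension. Returns the responder indices."""
    return [i for i, rec in enumerate(memory) if rec.get(field) == value]

def associative_max(memory, field):
    """Bit-serial maximum search, a classic associative-computing
    reduction: scan candidate values from the most significant bit
    down, and whenever any responder has a 1 in the current bit,
    discard those that do not. Takes O(word length) steps regardless
    of the number of cells. Returns indices of cells holding the max."""
    candidates = list(range(len(memory)))
    for bit in reversed(range(16)):  # assumed 16-bit unsigned values
        ones = [i for i in candidates if (memory[i][field] >> bit) & 1]
        if ones:
            candidates = ones
    return candidates
```

Broadcasts and reductions like these are exactly the operations whose constant-time costing the paper examines.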
A Transformational Framework for Skeletal Programs: Overview and Case Study
 Parallel and Distributed Processing. IPPS/SPDP’99 Workshops Proceedings. Lecture Notes in Computer Science 1586
, 1999
Abstract
Cited by 13 (2 self)
A structured approach to parallel programming allows applications to be constructed by composing skeletons, i.e., recurring patterns of task- and data-parallelism. Early academic and commercial experience with skeleton-based systems has demonstrated both the benefits of the approach and the lack of a special methodology for algorithm design and performance prediction. In this paper, we take a first step toward such a methodology by developing a general transformational framework named FAN and integrating it with an existing skeleton-based programming system, P3L. The framework includes a new functional abstract notation for expressing parallel algorithms, a set of semantics-preserving transformation rules, and analytical estimates of the rules' impact on program performance. The use of FAN is demonstrated on a case study: we design a parallel algorithm for the maximum segment sum problem, translate the algorithm into P3L, and experiment with the target C+MPI code on a Fujitsu AP1000 parallel machine.
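The case-study problem, maximum segment sum, is a standard example of a reduction with an associative combiner, which is exactly the shape a skeleton system can run as a map followed by a parallel reduction tree. The sketch below is the classic four-tuple formulation in Python, not the FAN/P3L code:

```python
from functools import reduce

def mss_tuple(x):
    # For a singleton: (best segment, best prefix, best suffix, total).
    # max(x, 0) because the empty segment (sum 0) is allowed.
    return (max(x, 0), max(x, 0), max(x, 0), x)

def combine(a, b):
    """Associative combiner for maximum segment sum. Associativity is
    what permits evaluating the reduction as a balanced parallel tree."""
    mssa, mpsa, mcsa, ta = a
    mssb, mpsb, mcsb, tb = b
    return (max(mssa, mssb, mcsa + mpsb),  # best segment may span the seam
            max(mpsa, ta + mpsb),          # best prefix of the concatenation
            max(mcsb, mcsa + tb),          # best suffix of the concatenation
            ta + tb)                       # total sum

def max_segment_sum(xs):
    return reduce(combine, map(mss_tuple, xs))[0]
```

On Bentley's well-known instance `[31, -41, 59, 26, -53, 58, 97, -93, -23, 84]` this returns 187; an all-negative input returns 0 under the empty-segment convention.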
Simulation of Enhanced Meshes with MASC, a MSIMD Model
 in Proc. of the 11th International Conference on Parallel and Distributed Computing Systems
, 1999
Abstract
Cited by 11 (6 self)
Abstract: MASC (for Multiple Associative Computing) is a joint control-parallel, data-parallel model that provides a practical, highly scalable model that naturally supports small to massive parallelism and a wide range of applications. In this paper, we present efficient algorithms for a MASC model with a 2D mesh to simulate enhanced meshes. Let MASC(n, j) denote a MASC model with n processing elements and j instruction streams. It is shown that a MASC(n, j) model with a 2D mesh is strictly more powerful than a √n × √n MMB (Mesh with Multiple Broadcasting) when j = Ω(√n). Simulation of a √n × √n MMB by MASC(n, j) with a 2D mesh runs in O(1) time and requires no extra memory. Simulating a √n × √n BRM (Basic Reconfigurable Mesh) with MASC(n, j) with a 2D mesh takes O(√n) extra time with O(n) extra memory when j = Ω(√n). The reverse simulations of the MMB and BRM with MASC with a 2D mesh are also given. These simulations not only provide information about the power of the MASC model but also provide an automatic conversion of numerous algorithms designed for enhanced meshes to the MASC model. Key Words: parallel models of computation, associative computing, simulation, mesh with multiple broadcasting, enhanced meshes, MSIMD
PRO: A Model for the Design and Analysis of Efficient and Scalable Parallel Algorithms
 NORDIC JOURNAL OF COMPUTING
, 2006
Abstract
Cited by 9 (1 self)
We present a new parallel computation model called the Parallel Resource-Optimal (PRO) computation model. PRO is a framework proposed to enable the design of efficient and scalable parallel algorithms in an architecture-independent manner, and to simplify the analysis of such algorithms. A focus on three key features distinguishes PRO from existing parallel computation models. First, the design and analysis of a parallel algorithm in the PRO model is performed relative to the time and space complexity of a specific sequential algorithm. Second, a PRO algorithm is required to be both time- and space-optimal relative to the reference sequential algorithm. Third, the quality of a PRO algorithm is measured by the maximum number of processors that can be employed while optimality is maintained. Inspired by the Bulk Synchronous Parallel model, an algorithm in the PRO model is organized as a sequence of supersteps. Each superstep consists of distinct computation and communication phases, but the supersteps are not required to be separated by synchronization barriers. Both computation and communication costs are accounted for in the runtime analysis of a PRO algorithm. Experimental results on parallel algorithms designed using the PRO model, and implemented using its accompanying programming environment SSCRAP, demonstrate that the model indeed delivers efficient and scalable implementations on a wide range of platforms.
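The quality measure above, the maximum processor count at which optimality is maintained, can be illustrated numerically. The helper below is hypothetical (the function names and the constant factor c are assumptions, not the paper's definitions); it probes powers of two for the largest p keeping the parallel runtime within a constant factor of the optimal T_seq(n)/p:

```python
import math

def pro_grain(t_seq, t_par, n, c=4.0):
    """Hypothetical sketch of the PRO quality measure: the largest
    processor count p (powers of two only, for brevity) for which the
    modeled parallel runtime t_par(n, p), computation plus
    communication, stays within factor c of t_seq(n)/p, i.e. speedup
    remains linear in p."""
    p, best = 1, 1
    while p <= n:
        if t_par(n, p) <= c * t_seq(n) / p:
            best = p
        p *= 2
    return best
```

For example, with T_seq(n) = n·log n and a modeled T_par(n, p) = n·log n / p + p·log n (the second term standing for communication cost), the optimality condition with c = 4 reduces to p² ≤ 3n, so for n = 2^16 the largest admissible power of two is 256.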
Integer Sorting and Routing in Arrays with Reconfigurable Optical Buses
, 1996
Abstract
Cited by 9 (0 self)
In this paper we present deterministic algorithms for integer sorting and online packet routing on arrays with reconfigurable optical buses. The main objective is to identify the mechanisms specific to this type of architecture that allow us to build efficient integer sorting, partial permutation routing, and h-relations algorithms. The consequences of these results for the complexity of PRAM simulation are also investigated. Keywords: optical pipelined buses, reconfigurable array, sorting, routing. 1. Introduction: In large-scale general-purpose parallel machines based on interconnection networks, efficient communication capabilities are essential in order to solve most problems of interest in a timely manner. Interprocessor communication networks are often the main bottleneck in parallel machines. One important limitation of these networks concerns the exclusive access to the bus resources, which limits throughput to a function of the end-to-end propagation time. Optical communicati...
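Integer sorting in this setting is typically built from stable digit-wise passes. As background only (this is not the paper's bus algorithm), here is a plain Python radix sort; the histogram and prefix-sum phases are the steps that bus-based architectures are good at parallelizing:

```python
def counting_sort_by_digit(keys, shift):
    """One stable counting-sort pass on the base-256 digit
    (key >> shift) & 255 of each non-negative integer key."""
    count = [0] * 256
    for k in keys:
        count[(k >> shift) & 255] += 1
    # exclusive prefix sum: each bucket's first output slot
    total = 0
    for d in range(256):
        count[d], total = total, total + count[d]
    out = [0] * len(keys)
    for k in keys:           # stable: equal digits keep input order
        d = (k >> shift) & 255
        out[count[d]] = k
        count[d] += 1
    return out

def radix_sort(keys, width=32):
    """Chain stable passes from least to most significant digit."""
    for shift in range(0, width, 8):
        keys = counting_sort_by_digit(keys, shift)
    return keys
```

Because each pass is stable, sorting by successively more significant digits yields a fully sorted sequence after width/8 passes.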
Computing In The Presence Of Uncertainty: Disturbing The Peace
 Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications, Las Vegas
, 2002
Abstract
Cited by 8 (6 self)
Are there computations whose characteristics are akin to certain unique phenomena that are witnessed in different domains of science? We are particularly interested in systems whose parameters are altered unpredictably whenever one of these parameters is measured or modified. For example, is there a computational environment in which the uncertainty principle of digital signal processing and Le Châtelier's principle of chemical systems in equilibrium are manifested simultaneously? A positive answer might uncover computations that are inherently parallel in the strong sense, meaning that they are efficiently executed in parallel, but impossible to carry out sequentially.