Results 1–10 of 10
Linearizability: a correctness condition for concurrent objects
, 1990
Abstract

Cited by 927 (26 self)
A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object’s operations can be given by pre- and postconditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.
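To make the definition concrete, here is a minimal brute-force sketch (not from the paper) of a linearizability check for a single read/write register: a finite history is linearizable if some total order of its operations respects real-time precedence and is a legal sequential register history. The `linearizable` function and the history encoding are illustrative assumptions.

```python
from itertools import permutations

def linearizable(history):
    """history: list of (op, value, invoke, response) tuples for a single
    read/write register with initial value None. Brute-force search for a
    sequential order that (a) respects real-time precedence and (b) is
    legal: every read returns the most recently written value."""
    n = len(history)
    for order in permutations(range(n)):
        pos = {j: i for i, j in enumerate(order)}
        # (a) if op i responds before op j is invoked, i must come first
        if any(history[i][3] < history[j][2] and pos[i] > pos[j]
               for i in range(n) for j in range(n)):
            continue
        # (b) replay the candidate order against register semantics
        value, legal = None, True
        for k in order:
            op, v, _, _ = history[k]
            if op == "write":
                value = v
            elif v != value:          # a read must return the latest write
                legal = False
                break
        if legal:
            return True
    return False
```

For example, a read that overlaps a write may return the new value (linearizable), but a read invoked after a write has already responded may not return the old one.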
A methodology for implementing highly concurrent data structures
 In 2nd Symp. Principles & Practice of Parallel Programming
, 1990
Abstract

Cited by 320 (12 self)
A concurrent object is a data structure shared by concurrent processes. Conventional techniques for implementing concurrent objects typically rely on critical sections: ensuring that only one process at a time can operate on the object. Nevertheless, critical sections are poorly suited for asynchronous systems: if one process is halted or delayed in a critical section, other, non-faulty processes will be unable to progress. By contrast, a concurrent object implementation is non-blocking if it always guarantees that some process will complete an operation in a finite number of steps, and it is wait-free if it guarantees that each process will complete an operation in a finite number of steps. This paper proposes a new methodology for constructing non-blocking and wait-free implementations of concurrent objects. The object’s representation and operations are written as stylized sequential programs, with no explicit synchronization. Each sequential operation is automatically transformed into a non-blocking or wait-free operation using novel synchronization and memory management algorithms. These algorithms are presented for a multiple instruction/multiple data (MIMD) architecture in which n processes communicate by applying read, write, and compare&swap operations to a shared memory.
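The read/write/compare&swap style of implementation can be sketched as a retry loop. Python has no hardware compare-and-swap, so the `CASCell` class below simulates one with a lock purely for illustration; the `increment` function shows the non-blocking pattern: read the state, compute the new state with plain sequential code, attempt to install it with CAS, and retry on interference.

```python
import threading

class CASCell:
    """Toy compare-and-swap cell. Real hardware CAS is a single atomic
    instruction; here it is simulated with a lock for illustration only."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        # Atomically: install `new` only if the current value is `expected`.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def increment(cell):
    """Retry loop in the non-blocking style described above: no process
    ever holds a critical section over the logical object, so a delayed
    process cannot block the others."""
    while True:
        old = cell.load()
        if cell.compare_and_swap(old, old + 1):
            return old + 1
```

Running several threads that each call `increment` repeatedly yields the correct total, since a CAS only succeeds when no other update intervened.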
Portable Distributed Priority Queues with MPI
, 1995
Abstract

Cited by 9 (0 self)
Part of this work has been presented in [17]. This paper analyzes the performance of portable distributed priority queues by examining the theoretical features required and by comparing various implementations. In spite of intrinsic bottlenecks and induced hot spots, we argue that tree topologies are attractive for managing the naturally centralized control required by the deletemin operation, which must detect the site holding the item with the highest priority. We introduce an original perfect balancing to cope with the load variation due to the priority queue operations, which continuously modify the overall number of items in the network. For comparison, we introduce the d-heap and the binomial distributed priority queue. The purpose of this experiment is to convey, through executions on the Cray T3D and Meiko T800, an understanding of the nature of distributed priority queues, the range of their concurrency, and a comparison of their efficiency in reducing request latency. In particu...
Wait-Free Algorithms for Heaps
, 1994
Abstract

Cited by 6 (0 self)
This paper examines algorithms to implement heaps on shared memory multiprocessors. A natural model for these machines is an asynchronous parallel machine, in which the processors are subject to arbitrary delays. On such machines, it is desirable for algorithms to be wait-free, meaning that each thread makes progress independent of the other threads executing on the machine. We present a wait-free algorithm to implement heaps. The algorithms are similar to the general approach given in [4], with optimizations that allow many threads to work on the heap simultaneously, while still guaranteeing a strong serializability property. 1 Introduction We are interested in designing efficient data structures and algorithms for shared memory multiprocessors. Processors on these machines may execute instructions at a varying rate (due to cache behavior, for example), and are subject to long delays (e.g. when swapped out by the scheduler, or after a page fault). Programs are executed by a collection...
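For reference, the sequential heap operations that such wait-free constructions start from look roughly like the sketch below: a plain array-based min-heap with insert and deletemin. This is illustrative only; the wait-free transformation itself is not shown.

```python
class BinaryHeap:
    """Sequential array-based min-heap: the baseline data structure whose
    insert and deletemin operations the paper's construction parallelizes."""
    def __init__(self):
        self.a = []

    def insert(self, key):
        self.a.append(key)
        i = len(self.a) - 1
        # sift up: swap with the parent while the heap order is violated
        while i > 0 and self.a[(i - 1) // 2] > self.a[i]:
            self.a[i], self.a[(i - 1) // 2] = self.a[(i - 1) // 2], self.a[i]
            i = (i - 1) // 2

    def deletemin(self):
        # precondition: the heap is non-empty
        a = self.a
        a[0], a[-1] = a[-1], a[0]
        smallest = a.pop()
        i = 0
        # sift down: swap with the smaller child until heap order holds
        while True:
            l, r, m = 2 * i + 1, 2 * i + 2, i
            if l < len(a) and a[l] < a[m]:
                m = l
            if r < len(a) and a[r] < a[m]:
                m = r
            if m == i:
                return smallest
            a[i], a[m] = a[m], a[i]
            i = m
```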
An Algorithm for Full Text Indexing
, 1992
Abstract

Cited by 2 (0 self)
A fast B-tree based indexing algorithm is presented. In some applications, such as full text indexing or indexing of very large tables, the new algorithm can be orders of magnitude faster than conventional B-tree insertion algorithms, while still allowing concurrent access. A similar algorithm can be used for deletion.
An Efficient Implementation of Parallel A*
, 1994
Abstract

Cited by 2 (1 self)
This paper presents a new parallel implementation of the heuristic state space search algorithm A*. We show the efficiency of a new use of a data structure, the treap, instead of traditional priority queues (heaps). This data structure allows operations such as Insert, DeleteMin, and Search, which are essential in the A* algorithm. Furthermore, we give a concurrent algorithm for the treap within a shared memory environment. Results on the 15-puzzle are presented; they have been obtained on two machines, with and without virtual shared memory: the KSR1 and the Sequent Balance 8000. Keywords: Heuristic search, A*, data structure, binary search tree, priority queue, parallelism, concurrency. 1 Introduction Search is a technique widely used in Artificial Intelligence (AI) and Operational Research (OR) for solving Discrete Optimization problems [18, 17, 20, 27]. The space of potential solutions of these problems can be specified, but the difficulty is that its cardinality is too large to be ...
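A sequential sketch of the treap operations named above (Insert, DeleteMin, Search) may help: a treap keeps keys in binary-search-tree order and random priorities in heap order, so it stays balanced in expectation. This is illustrative code, not the paper's concurrent algorithm; for brevity, `delete_min` splices out the leftmost node, which preserves search-tree order but skips the priority repair a full treap would do.

```python
import random

class Node:
    def __init__(self, key):
        self.key = key
        self.prio = random.random()   # random heap priority (min at the top)
        self.left = self.right = None

def rotate_right(t):
    l = t.left
    t.left, l.right = l.right, t
    return l

def rotate_left(t):
    r = t.right
    t.right, r.left = r.left, t
    return r

def insert(t, key):
    """BST insert, then rotate up while the child's priority is smaller."""
    if t is None:
        return Node(key)
    if key < t.key:
        t.left = insert(t.left, key)
        if t.left.prio < t.prio:
            t = rotate_right(t)
    else:
        t.right = insert(t.right, key)
        if t.right.prio < t.prio:
            t = rotate_left(t)
    return t

def search(t, key):
    while t is not None and t.key != key:
        t = t.left if key < t.key else t.right
    return t is not None

def delete_min(t):
    """Remove and return (smallest key, new root); the minimum key is the
    leftmost node of the search tree. Precondition: t is not None."""
    if t.left is None:
        return t.key, t.right
    parent = t
    while parent.left.left is not None:
        parent = parent.left
    k = parent.left.key
    parent.left = parent.left.right   # splice out the leftmost node
    return k, t
```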
The Outcome of a Know-how: a Branch-and-Bound Library
 Solving Combinatorial Optimization Problems in Parallel, LNCS
, 1995
Abstract

Cited by 1 (0 self)
Introduction Exact methods used to solve difficult Combinatorial Optimization problems belong to a generic type. Widely used in Operations Research (OR) and Artificial Intelligence (AI), they consist of exploring a search space: either the tree of subproblems generated by recursive partitioning of the initial problem (OR: Branch-and-Bound, denoted B&B, and Branch-and-Cut algorithms), or the graph of transitions between states (AI: the A* algorithm, α-β). Exhaustive search of the space is avoided; the knowledge acquired during the search allows certain nodes to be pruned or some parts of the space to be eliminated. But as computational requirements (time and space) grow exponentially with the problem size, the possibilities of overflowing storage or consuming too much time can hang the program before reaching the optimal solution. To deal with this type of exploration, which tends to create a combinatorial explos...
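The generic scheme, exploring a tree of subproblems and using acquired knowledge to prune, can be sketched on a toy 0/1 knapsack instance. This is an illustrative example, not from the report; the optimistic bound here simply sums all remaining item values.

```python
def branch_and_bound_knapsack(items, capacity):
    """Tiny Branch-and-Bound sketch: the subproblem tree branches on
    take/skip for each item, and a node is pruned when an optimistic
    bound shows it cannot beat the best solution found so far.
    items: list of (value, weight) pairs."""
    best = 0

    def bound(i, value):
        # optimistic bound: add every remaining value, ignoring capacity
        return value + sum(v for v, _ in items[i:])

    def explore(i, value, room):
        nonlocal best
        if value > best:
            best = value
        if i == len(items) or bound(i, value) <= best:
            return                                    # leaf, or pruned
        v, w = items[i]
        if w <= room:
            explore(i + 1, value + v, room - w)       # take item i
        explore(i + 1, value, room)                   # skip item i

    explore(0, 0, capacity)
    return best
```

Because the bound never underestimates what a subtree can achieve, pruning on `bound <= best` never discards an optimal solution.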
Parallel and Distributed Branch-and-Bound/A* Algorithms
, 1994
Abstract

Cited by 1 (0 self)
In this report, we propose new concurrent data structures and load balancing strategies for Branch-and-Bound (B&B)/A* algorithms in two models of parallel programming: shared and distributed memory. For the shared memory model (SMM), we present a general methodology which allows concurrent manipulation of most tree data structures, and show its usefulness for implementation on multiprocessors with global shared memory. Some priority queues which are suited to the basic operations performed by B&B algorithms are described: Skew heaps, funnels, and Splay trees. We also detail a specific data structure, called the treap, designed for the A* algorithm. These data structures are implemented on a parallel machine with shared memory: the KSR1. For the distributed memory model (DMM), we show that the use of partial cost in B&B algorithms is not enough to balance nodes between the local queues. Thus, we introduce another notion of priority, called potentiality, between nodes that take...
BOB: a Unified Platform for Implementing Branch-and-Bound-like Algorithms
, 1995
Abstract
In this report, we propose the library BOB for easy development of Branch-and-Bound applications (minimization/maximization). This library has a double goal. On the one hand, it allows the Combinatorial Optimization community to implement their applications without worrying about the architecture of the machines, while benefiting from the advantages provided by parallelism. On the other hand, BOB offers the Parallelism community a set of benchmarks, composed of efficient Combinatorial Optimization algorithms, for its parallelization methods and/or tools. To achieve this double goal, the BOB library is founded on the notion of a global priority queue, which makes the parallelization methods independent from the applications, and vice versa. For this global priority queue, we describe different implementation models (asynchronous, synchronous, client/server, ...) according to the type of machine used (serial, parallel with shared or distributed memory). A set of serial and concurrent dat...