Results 1–10 of 18
Analysis of Shellsort and related algorithms
 ESA ’96: Fourth Annual European Symposium on Algorithms
, 1996
"... This is an abstract of a survey talk on the theoretical and empirical studies that have been done over the past four decades on the Shellsort algorithm and its variants. The discussion includes: upper bounds, including linkages to numbertheoretic properties of the algorithm; lower bounds on Shellso ..."
Abstract

Cited by 31 (0 self)
This is an abstract of a survey talk on the theoretical and empirical studies that have been done over the past four decades on the Shellsort algorithm and its variants. The discussion includes: upper bounds, including linkages to number-theoretic properties of the algorithm; lower bounds on Shellsort and Shellsort-based networks; average-case results; proposed probabilistic sorting networks based on the algorithm; and a list of open problems. 1 Shellsort The basic Shellsort algorithm is among the earliest sorting methods to be discovered (by D. L. Shell in 1959 [36]) and is among the easiest to implement, as exhibited by the following C code for sorting an array a[l],..., a[r]: shellsort(itemType a[], int l, int r) { int i, j, h; itemType v;
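The C snippet in the abstract breaks off after the declarations; a plausible completion, using Knuth's 3h+1 increment sequence as an assumption (the survey itself weighs many increment sequences), is:

```c
typedef int itemType;

/* Shellsort for a[l..r], completing the truncated snippet above.
   The 3h+1 increments are an assumed choice, not necessarily the
   sequence the survey recommends. */
void shellsort(itemType a[], int l, int r) {
    int i, j, h;
    itemType v;
    for (h = 1; h <= (r - l) / 9; h = 3 * h + 1)
        ;                                  /* find largest increment */
    for (; h > 0; h /= 3)                  /* h-sort for each increment */
        for (i = l + h; i <= r; i++) {
            v = a[i];
            j = i;
            while (j >= l + h && a[j - h] > v) {
                a[j] = a[j - h];           /* shift h-sorted elements */
                j -= h;
            }
            a[j] = v;
        }
}
```

Calling shellsort(a, 0, n-1) sorts a[0..n-1]; each pass is an insertion sort over elements h apart, so the final h = 1 pass works on nearly sorted data.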
Parallel Implementation and Practical Use of Sparse Approximate Inverse Preconditioners With a Priori Sparsity Patterns
 Int. J. High Perf. Comput. Appl
, 2001
"... This paper describes and tests a parallel, message passing code for constructing sparse approximate inverse preconditioners using Frobenius norm minimization. The sparsity patterns of the preconditioners are chosen as patterns of powers of sparsified matrices. Sparsification is necessary when powers ..."
Abstract

Cited by 30 (2 self)
This paper describes and tests a parallel, message passing code for constructing sparse approximate inverse preconditioners using Frobenius norm minimization. The sparsity patterns of the preconditioners are chosen as patterns of powers of sparsified matrices. Sparsification is necessary when powers of a matrix have a large number of nonzeros, making the approximate inverse computation expensive. For our test problems, the minimum solution time is achieved with approximate inverses with fewer than twice the number of nonzeros of the original matrix. Additional accuracy is not compensated by the increased cost per iteration. The results lead to further understanding of how to use these methods and how well these methods work in practice. In addition, this paper describes programming techniques required for high performance, including one-sided communication, local coordinate numbering, and load repartitioning.
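As a toy illustration of the pattern-selection step described in the abstract (not the paper's parallel code), one can sparsify a small dense matrix by a drop tolerance and take the boolean pattern of its square; the 4x4 size, tolerance, and function names below are hypothetical:

```c
#include <math.h>

#define N 4

/* Drop entries below tol to get the boolean pattern of the
   sparsified matrix.  Hypothetical sketch of the paper's
   "sparsification" step only. */
void sparsify_pattern(double a[N][N], double tol, int pat[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            pat[i][j] = fabs(a[i][j]) >= tol;
}

/* Boolean product: the pattern of S*S, used as the preconditioner's
   sparsity pattern.  The Frobenius-norm minimization over this
   pattern is omitted. */
void pattern_square(int s[N][N], int out[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            out[i][j] = 0;
            for (int k = 0; k < N; k++)
                if (s[i][k] && s[k][j]) { out[i][j] = 1; break; }
        }
}
```

For a tridiagonal pattern, squaring widens the band to five diagonals, which is why the paper must sparsify before taking powers: fill grows quickly.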
Weight Biased Leftist Trees and Modified Skip Lists
 Journal of Experimental Algorithmics
, 1996
"... this paper, we are concerned primarily with the insert and delete min operations. The different data structures that have been proposed for the representation of a priority queue differ in terms of the performance guarantees they provide. Some guarantee good performance on a per operation basis whil ..."
Abstract

Cited by 13 (1 self)
In this paper, we are concerned primarily with the insert and delete min operations. The different data structures that have been proposed for the representation of a priority queue differ in terms of the performance guarantees they provide. Some guarantee good performance on a per operation basis while others do this only in the amortized sense. Heaps permit one to delete the min element and insert an arbitrary element into an n element priority queue in O(log n) time per operation; a find min takes O(1) time. Additionally, a heap is an implicit data structure that has no storage overhead associated with it. All other priority queue structures are pointer-based and so require additional storage for the pointers. Leftist trees also support the insert and delete min operations in O(log n) time per operation and the find min operation in O(1) time. Additionally, they permit us to meld pairs of priority queues in logarithmic time.
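The implicit-heap bounds quoted in the abstract can be seen in a minimal array-based min-heap; the fixed capacity of 64 is an assumption of this sketch:

```c
/* Implicit binary min-heap: no pointers, children of node i live at
   2i+1 and 2i+2.  Insert and delete-min are O(log n), find-min O(1). */
typedef struct { int a[64]; int n; } Heap;

void heap_insert(Heap *h, int v) {
    int i = h->n++;
    h->a[i] = v;
    while (i > 0 && h->a[(i - 1) / 2] > h->a[i]) {       /* sift up */
        int t = h->a[i]; h->a[i] = h->a[(i - 1) / 2]; h->a[(i - 1) / 2] = t;
        i = (i - 1) / 2;
    }
}

int heap_find_min(const Heap *h) { return h->a[0]; }     /* O(1) */

int heap_delete_min(Heap *h) {
    int min = h->a[0], i = 0;
    h->a[0] = h->a[--h->n];                              /* move last to root */
    for (;;) {                                           /* sift down */
        int l = 2 * i + 1, r = l + 1, s = i;
        if (l < h->n && h->a[l] < h->a[s]) s = l;
        if (r < h->n && h->a[r] < h->a[s]) s = r;
        if (s == i) break;
        int t = h->a[i]; h->a[i] = h->a[s]; h->a[s] = t;
        i = s;
    }
    return min;
}
```

A leftist tree or skip-list variant, by contrast, stores explicit child pointers per node, which is exactly the storage overhead the abstract contrasts against.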
Parallel Implementation and Performance Characteristics of Least Squares Sparse Approximate Inverse Preconditioners
 Int. J. High Perform. Comput. Appl.
, 2000
"... This paper describes and tests a parallel, message passing code for constructing sparse approximate inverse preconditioners using Frobenius norm minimization. The sparsity patterns of the preconditioners are chosen as patterns of powers of sparsied matrices. Sparsication is necessary when powers ..."
Abstract

Cited by 7 (0 self)
This paper describes and tests a parallel, message passing code for constructing sparse approximate inverse preconditioners using Frobenius norm minimization. The sparsity patterns of the preconditioners are chosen as patterns of powers of sparsified matrices. Sparsification is necessary when powers of a matrix have a large number of nonzeros, making the approximate inverse computation expensive. For our test problems, the minimum solution time is achieved with approximate inverses with fewer than twice the number of nonzeros of the original matrix. Additional accuracy is not compensated by the increased cost per iteration. The results lead to further understanding of how to use these methods and how well these methods work in practice. In addition, this paper describes programming techniques required for high performance, including one-sided communication, local coordinate numbering, and load repartitioning. 1 Introduction A sparse approximate inverse approximates the invers...
Analysis Of Linear Hashing Revisited
, 1998
"... . In this paper we characterize several expansion techniques used for linear hashing and we present how to analyze any linear hashing technique that expands based on local events or that mixes local events and global conditions. As an example we give a very simple randomized expansion technique, whi ..."
Abstract

Cited by 3 (0 self)
In this paper we characterize several expansion techniques used for linear hashing and we present how to analyze any linear hashing technique that expands based on local events or that mixes local events and global conditions. As an example we give a very simple randomized expansion technique, which is easy to analyze and implement. Furthermore, we obtain the analysis of the original hashing technique devised by Litwin, which was unsolved until now, comparing it to the later and more widely used version of Larson's. We also analyze one hybrid technique. Among other results, it is shown that the control function used by Litwin does not produce good storage utilization, matching known experimental data. CR Classification: F.2.2, E.5, E.2. Key words: external hashing, linear hashing, analysis of algorithms, optimal bucketing. 1. Introduction External hashing is a very efficient technique used to obtain fast organization and retrieval of information in large files whose conten...
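The addressing rule that all of these expansion techniques share can be sketched as follows; the parameter names (n0 initial buckets, level, split pointer p) are mine, and the expansion policy itself, the part the paper analyzes, is deliberately left out:

```c
/* Linear-hashing address calculation in Litwin's style.  At level L the
   file uses hash functions h_L(k) = k mod (n0 * 2^L).  Buckets before
   the split pointer p have already been split this round, so they are
   addressed with h_{L+1}.  A sketch of the addressing rule only. */
unsigned lh_address(unsigned key, unsigned n0, unsigned level, unsigned p) {
    unsigned addr = key % (n0 << level);        /* h_L */
    if (addr < p)
        addr = key % (n0 << (level + 1));       /* h_{L+1} for split buckets */
    return addr;
}
```

An expansion technique then amounts to a policy for advancing p (and incrementing level when p wraps), triggered by local events such as a bucket overflow or by a global condition such as the storage utilization.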
Discrete-Event Simulation on the Bulk-Synchronous Parallel Model
, 1998
"... The bulksynchronous parallel (BSP) model of computing has been proposed to enable the development of portable software which achieves scalable performance across diverse parallel architectures. A number of applications of computing science have been demonstrated to be efficiently supported by the B ..."
Abstract

Cited by 3 (0 self)
The bulk-synchronous parallel (BSP) model of computing has been proposed to enable the development of portable software which achieves scalable performance across diverse parallel architectures. A number of applications in computing science have been demonstrated to be efficiently supported by the BSP model in practice. In this
On the Pending Event Set and Binary Tournaments
"... this paper we study the performance of the very first tournament based complete binary tree. We focus on discreteevent simulation and our results show that this unknown predecessor of heaps can be a more efficient alternative to the fastest pending event set implementations reported in the literatu ..."
Abstract

Cited by 3 (3 self)
In this paper we study the performance of the very first tournament-based complete binary tree. We focus on discrete-event simulation and our results show that this little-known predecessor of heaps can be a more efficient alternative to the fastest pending event set implementations reported in the literature. We also extend the idea of binary tournaments to a (2, L)-tournament structure which exhibits the property of delaying the processing of events with larger timestamps whilst keeping theoretical performance bounds similar to those of the native (2, 1)-structure or CBT. This property can certainly be useful in systems where many pending events are expected to be deleted or rescheduled during the simulation.
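A winner tree over four leaves shows the basic replay mechanism behind such a CBT; the fixed N and the array layout (node i's children at 2i and 2i+1, leaves stored at N..2N-1) are assumptions of this sketch, not the paper's (2, L) structure:

```c
/* Minimal winner (tournament) tree: internal node i records which leaf
   won the match between its two children; the root holds the leaf with
   the smallest key (e.g. the next pending event's timestamp).
   N is assumed to be a power of two. */
#define N 4

typedef struct {
    double key[N];   /* leaf keys, e.g. event timestamps */
    int win[2 * N];  /* win[1..N-1]: winning leaf index of each match */
} Tourn;

void tourn_build(Tourn *t) {
    for (int i = 0; i < N; i++) t->win[N + i] = i;      /* leaves */
    for (int i = N - 1; i >= 1; i--) {
        int l = t->win[2 * i], r = t->win[2 * i + 1];
        t->win[i] = (t->key[l] <= t->key[r]) ? l : r;   /* replay match */
    }
}

int tourn_min(const Tourn *t) { return t->win[1]; }

/* After changing leaf i's key (reschedule or delete-min replacement),
   replay only the O(log N) matches on its path to the root. */
void tourn_update(Tourn *t, int i) {
    for (int n = (N + i) / 2; n >= 1; n /= 2) {
        int l = t->win[2 * n], r = t->win[2 * n + 1];
        t->win[n] = (t->key[l] <= t->key[r]) ? l : r;
    }
}
```

Unlike a heap, an update touches exactly one root path and never moves keys between leaves, which is what makes rescheduling cheap in a pending event set.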
Efficient AgentBased Dissemination of Textual Information
"... Abstract. We study the problem of efficient dissemination of textual information over widearea networks. Our dissemination architecture utilises middleagents and sophisticated matching algorithms. The data model and query language is based on the wellknown Boolean model from Information Retrieval ..."
Abstract

Cited by 2 (2 self)
Abstract. We study the problem of efficient dissemination of textual information over wide-area networks. Our dissemination architecture utilises middle-agents and sophisticated matching algorithms. The data model and query language are based on the well-known Boolean model from Information Retrieval. The main focus of this paper is the problem of matching incoming documents with submitted user profiles. We present four efficient main memory algorithms for this problem and compare them experimentally.
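None of the paper's four algorithms is reproduced here, but the matching problem itself can be stated as a brute-force baseline: a conjunctive profile (one fragment of the Boolean model) matches a document when every keyword occurs in it. All names below are illustrative:

```c
#include <string.h>

/* Does the document (a list of words) contain the given word? */
int doc_contains(const char *doc[], int ndoc, const char *word) {
    for (int i = 0; i < ndoc; i++)
        if (strcmp(doc[i], word) == 0) return 1;
    return 0;
}

/* A conjunctive profile matches when every required keyword is present.
   Brute-force O(ndoc * nprof) scan; the paper's algorithms index the
   profiles to avoid exactly this per-document cost. */
int profile_matches(const char *doc[], int ndoc,
                    const char *profile[], int nprof) {
    for (int j = 0; j < nprof; j++)
        if (!doc_contains(doc, ndoc, profile[j])) return 0;
    return 1;
}
```

With thousands of stored profiles per incoming document, the main-memory algorithms the abstract mentions exist precisely to beat this baseline.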