Results 1–10 of 27
The Complexity of Renaming
Abstract

Cited by 9 (8 self)
We study the complexity of renaming, a fundamental problem in distributed computing in which a set of processes need to pick distinct names from a given namespace. We prove an individual lower bound of Ω(k) process steps for deterministic renaming into any namespace of size subexponential in k, where k is the number of participants. This bound is tight: it draws an exponential separation between deterministic and randomized solutions, and implies new tight bounds for deterministic fetch-and-increment registers, queues and stacks. The proof of the bound is interesting in its own right, for it relies on the first reduction from renaming to another fundamental problem in distributed computing: mutual exclusion. We complement our individual bound with a global lower bound of Ω(k log(k/c)) on the total step complexity of renaming into a namespace of size ck, for any c ≥ 1. This applies to randomized algorithms against a strong adversary, and helps derive new global lower bounds for randomized approximate counter and fetch-and-increment implementations, all tight within logarithmic factors.
A Linear Time Algorithm for the k Maximal Sums Problem
Abstract

Cited by 6 (2 self)
Finding the subvector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k subvectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m^2·n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d−1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d−1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
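For intuition, here is a minimal Python sketch of the k maximal sums problem. It uses the naive O(n² log k) prefix-sum approach, not the paper's optimal O(n + k) algorithm; the function name is ours.

```python
import heapq

def k_maximal_sums(a, k):
    """Return the k largest sums over all contiguous subvectors of a,
    in descending order. Naive prefix-sum sketch, O(n^2 log k)."""
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)
    # every subvector sum a[i..j-1] equals prefix[j] - prefix[i]
    sums = (prefix[j] - prefix[i]
            for j in range(1, len(prefix))
            for i in range(j))
    return heapq.nlargest(k, sums)

# e.g. k_maximal_sums([1, -2, 3, -1, 2], 3) yields [4, 3, 3]
```

The paper's contribution is precisely avoiding this quadratic enumeration while keeping the output size k independent of n.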
Optimal-Time Adaptive Strong Renaming, with Applications to Counting (Extended Abstract)
 PODC 2011, San Jose, USA
, 2011
Abstract

Cited by 5 (3 self)
We give two new randomized algorithms for strong renaming, both of which work against an adaptive adversary in asynchronous shared memory. The first uses repeated sampling over a sequence of arrays of decreasing size to assign unique names to each of n processes with step complexity O(log³ n). The second transforms any sorting network into a strong adaptive renaming protocol, with an expected cost equal to the depth of the sorting network. Using an AKS sorting network, this gives a strong adaptive renaming algorithm with step complexity O(log k), where k is the contention in the current execution. We show this to be optimal based on a classic lower bound of Jayanti. We also show that any such strong renaming protocol can be used to build a monotone-consistent counter with logarithmic step complexity (at the cost of adding a max register) or a linearizable fetch-and-increment register (at the cost of increasing the step complexity by a logarithmic factor).
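The sorting-network construction can be illustrated with a toy sequential simulation, assuming one-shot test-and-set comparators (the first process to access a comparator exits on its top wire, later ones on the bottom wire); each process's exit wire becomes its new name. This is only a sketch on a hand-coded 4-wire network, not the paper's protocol, which handles asynchronous concurrency and uses AKS networks for the O(log k) bound.

```python
class Comparator:
    """One-shot test-and-set: only the first process to access it wins."""
    def __init__(self):
        self.taken = False

    def test_and_set(self):
        won = not self.taken
        self.taken = True
        return won

# 4-wire sorting network (Batcher), comparators in traversal order
NETWORK = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def traverse(comparators, wire):
    """One process walks the network; its exit wire is its new name."""
    for comp, (top, bot) in zip(comparators, NETWORK):
        if wire in (top, bot):
            wire = top if comp.test_and_set() else bot
    return wire

def rename(entry_wires):
    comps = [Comparator() for _ in NETWORK]
    return [traverse(comps, w) for w in entry_wires]
```

Running the processes one after another, k participants end up with the distinct names 0..k−1, which is the strong (tight) renaming property.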
Proof-Based Synthesis of Sorting Algorithms
Abstract

Cited by 3 (3 self)
July 2010. We present some case studies in constructive synthesis of sorting algorithms. In order to synthesize some algorithms on tuples (e.g. insertion-sort, merge-sort) we use an approach based on proving. Namely, we start from the specification of the problem (input and output conditions) and we construct an inductive proof of the fact that for each input there exists a solution which satisfies the output condition. The problem is reduced to smaller and smaller problems, the method is applied in a "cascade", and finally the problem becomes so simple that the corresponding algorithm (function) already exists in the knowledge base. The algorithm can then be extracted immediately from the proof. These experiments are paralleled with the exploration of the appropriate theory of tuples. The purpose of these experiments is manifold: to construct the appropriate knowledge base necessary for this type of proofs, to find the natural deduction inference rules and the necessary strategies for their application, and finally to implement the corresponding provers in the frame of the Theorema system. The novel specific feature of our approach is the "cascade"-style application of the method.
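As a flavour of what such a synthesis produces, here is a hand-written Python rendering of the recursive structure the proof would yield for insertion-sort (sort the tail, then insert the head into the sorted tail). The actual extracted functions live inside Theorema; these names are ours.

```python
def insert(x, t):
    """Insert x into the already-sorted tuple t, keeping it sorted."""
    if not t or x <= t[0]:
        return (x,) + t
    return (t[0],) + insert(x, t[1:])

def insertion_sort(t):
    """Sort a tuple: the base case and the inductive step mirror the
    existence proof (empty tuple trivially sorted; otherwise insert
    the head into the sorted tail)."""
    if len(t) <= 1:
        return t
    return insert(t[0], insertion_sort(t[1:]))
```

The correspondence is direct: the induction on tuple length in the proof becomes the recursion on `t[1:]`, and the witness construction in the step case becomes `insert`.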
Evolving efficient recursive sorting algorithms
 in Proceedings of the 2006 IEEE Congress on Evolutionary Computation
, 2006
Abstract

Cited by 3 (1 self)
Evolutionary computation is applied to the task of evolving general recursive sorting algorithms. We studied the effects of language primitives and fitness functions on the success of the evolutionary process. The language primitives were the methods of a simple list-processing package. Five different fitness functions based on sequence disorder were evaluated. The time complexity of the successfully evolved algorithms was measured experimentally in terms of the number of method invocations made, and for the best evolved individuals this was best approximated as O(n log n). This is the first time that sorting algorithms of this complexity have been evolved.
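The abstract does not spell out the five disorder measures, but as one plausible example, a fitness function can score a candidate sorter by the residual disorder (inversion count) of its output over a set of test lists. A sketch, with names of our choosing:

```python
def inversions(seq):
    """Number of out-of-order pairs; 0 iff seq is sorted. One common
    measure of sequence disorder."""
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq))
                 if seq[i] > seq[j])

def fitness(sort_fn, test_lists):
    """Lower is better: total residual disorder after applying the
    evolved candidate to each test list."""
    return sum(inversions(sort_fn(list(xs))) for xs in test_lists)
```

A perfect sorter scores 0, and a candidate that merely reduces disorder still earns partial credit, which gives the evolutionary search a gradient to follow.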
Out-of-Core Selection and Editing of Huge Point Clouds
Abstract

Cited by 3 (1 self)
In this paper we present an out-of-core editing system for point clouds, which allows selecting and modifying arbitrary parts of a huge point cloud interactively. We can use the selections to segment the point cloud, to delete points, or to render a preview of the model without the points in the selections. Furthermore, we allow for inserting points into an already existing point cloud. All operations are conducted on a rendering-optimized data structure that uses the raw point cloud from a laser scanner, and no additionally created points are needed for an efficient level-of-detail (LOD) representation using this data structure. We also propose an algorithm to alleviate the artifacts when rendering a point cloud with large density discrepancies between different areas by estimating point sizes heuristically. These estimated point sizes can be used to mimic a closed surface on the raw point cloud, even when the point cloud is composed of several raw laser scans. Keywords: point-based rendering, viewing algorithms, graphics data structures and data types.
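The idea behind heuristic point-size estimation can be sketched with a naive O(n²) nearest-neighbour pass: give each point a splat radius proportional to its mean distance to its k nearest neighbours, so sparse regions get larger splats that close the surface. Function and parameter names are ours, and the paper's out-of-core version must of course avoid this all-pairs scan.

```python
import math

def estimate_point_sizes(points, k=4, scale=1.5):
    """Naive sketch: splat radius = scale * mean distance to the
    k nearest neighbours. Dense areas get small splats, sparse
    areas get large ones."""
    sizes = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        neighbours = dists[:k]
        sizes.append(scale * sum(neighbours) / len(neighbours))
    return sizes
```

A practical implementation would replace the all-pairs loop with a spatial index (e.g. the LOD hierarchy itself) so the estimate stays out-of-core friendly.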
Case Studies in Systematic Exploration of Tuple Theory
, 2010
Abstract

Cited by 2 (2 self)
We illustrate with concrete examples the systematic exploration of Tuple Theory in a bottom-up and top-down way. In the bottom-up exploration we start from two axioms, add new notions, and in this way build the theory; we check all the new notions introduced and prove some of them with the new prover which we created in the TH∃OREM∀ system. In order to synthesize some algorithms on tuples (e.g. insertion-sort) we use an approach based on proving. Namely, we start from the specification of the problem (input and output conditions) and we construct an inductive proof of the fact that for each input there exists a solution which satisfies the output condition. The problem is reduced to smaller problems, the method is applied in a "cascade", and finally the problem becomes so simple that the corresponding algorithm (function) already exists in the knowledge base. The algorithm can then be extracted immediately from the proof. We present an experiment on synthesis of the insertion-sort algorithm on tuples, based on the proof of existence of the solution. This experiment is paralleled with the construction (exploration) of the appropriate theory of tuples. The main purpose of this research is to concretely construct examples of theories and to reveal the typical activities which occur in theory exploration, in the context of a specific application – in this case algorithm synthesis by proving.
Parallel Retrieval of Dense Vectors in the Vector Space Model
Abstract

Cited by 2 (2 self)
Modern information retrieval systems use distributed and parallel algorithms to meet their operational requirements, and commonly operate on sparse vectors. But dimensionality-reducing techniques produce dense and relatively short feature vectors. Motivated by this relevance of dense vectors, we have parallelized the vector space model for dense matrices and vectors. Our algorithm uses a hybrid partitioning, splitting documents and features, and operates on a mesh of hosts holding a block-partitioned corpus matrix. We show that the theoretical speedup is optimal. The empirical evaluation of an MPI-based implementation reveals that we obtain a superlinear speedup on a cluster using Nehalem Xeon CPUs.
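The hybrid document-feature partitioning can be sketched sequentially: split the corpus matrix into a p_rows × p_cols mesh of blocks, let each simulated host compute partial scores for its rows over its slice of features, and sum the partials per document. Names are ours; the real system distributes the blocks over MPI ranks.

```python
def block_matvec(matrix, query, p_rows, p_cols):
    """Score all documents against a query vector, simulating a
    p_rows x p_cols mesh of hosts, each holding one block of the
    corpus matrix (rows = documents, columns = features)."""
    n_docs, n_feats = len(matrix), len(matrix[0])
    row_chunks = [range(r * n_docs // p_rows, (r + 1) * n_docs // p_rows)
                  for r in range(p_rows)]
    col_chunks = [range(c * n_feats // p_cols, (c + 1) * n_feats // p_cols)
                  for c in range(p_cols)]
    scores = [0.0] * n_docs
    for rows in row_chunks:
        for cols in col_chunks:  # one simulated host per block
            for i in rows:
                scores[i] += sum(matrix[i][j] * query[j] for j in cols)
    return scores
```

In the distributed setting the inner sums happen concurrently, and the per-document accumulation becomes a reduction along each mesh row.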
An Experimental Study of Sorting and Branch Prediction
Abstract

Cited by 2 (0 self)
Sorting is one of the most important and well studied problems in Computer Science. Many good algorithms are known which offer various tradeoffs in efficiency, simplicity, memory use, and other factors. However, these algorithms do not take into account features of modern computer architectures that significantly influence performance. Caches and branch predictors are two such features, and while there has been a significant amount of research into the cache performance of general purpose sorting algorithms, there has been little research on their branch prediction properties. In this paper we empirically examine the behaviour of the branches in all the most common sorting algorithms. We also consider the interaction of cache optimization on the predictability of the branches in these algorithms. We find insertion sort to have the fewest branch mispredictions of any comparison-based sorting algorithm, that bubble and shaker sort operate in a fashion which makes their branches highly unpredictable, that the unpredictability of shellsort's branches improves its caching behaviour and that several cache optimizations have little effect on mergesort's branch mispredictions. We find also that optimizations to quicksort – for example the choice of pivot – have a strong influence on the predictability of its branches. We point out a simple way of removing branch instructions from a classic heapsort implementation, and show also that unrolling a loop in a cache optimized heapsort implementation improves the predictability of its branches. Finally, we note that when sorting random data two-level adaptive branch predictors are usually no better than simpler bimodal predictors. This is despite the fact that two-level adaptive predictors are almost always superior to bimodal predictors in general.
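To see what "highly unpredictable branches" means concretely, one can simulate a single bimodal predictor (a 2-bit saturating counter) on one branch's taken/not-taken history and count mispredictions. This is only a rough sketch, not the paper's hardware-counter methodology.

```python
def bimodal_mispredictions(outcomes):
    """Simulate a 2-bit saturating counter predicting one branch.
    `outcomes` is the branch's taken/not-taken history as booleans;
    returns the number of mispredictions."""
    counter = 2  # start weakly taken (states 0-1 predict not-taken, 2-3 taken)
    misses = 0
    for taken in outcomes:
        predicted = counter >= 2
        if predicted != taken:
            misses += 1
        counter = min(3, counter + 1) if taken else max(0, counter - 1)
    return misses
```

A strongly biased branch (e.g. insertion sort's inner-loop exit) costs almost nothing, while an alternating pattern like the one bubble sort's comparison branch can produce mispredicts roughly half the time.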