Results 11–20 of 32
Average-Case Complexity of Shortest-Paths Problems
, 2001
Cited by 2 (0 self)
We study both upper and lower bounds on the average-case complexity of shortest-paths algorithms. It is proved that the all-pairs shortest-paths problem on n-vertex networks can be solved in time O(n² log n) with high probability with respect to various probability distributions on the set of inputs. Our results include the first theoretical analysis of the average behavior of shortest-paths algorithms with respect to the vertex-potential model, a family of probability distributions on complete networks with arbitrary real arc costs but without negative cycles. We also generalize earlier work with respect to the common uniform model, and we correct the analysis of an algorithm with respect to the endpoint-independent model. For the algorithm that solves the all-pairs shortest-paths problem on networks generated according to the vertex-potential model, a key ingredient is an algorithm that solves the single-source shortest-paths problem on such networks in time O(n²) with high probability. All algorithms mentioned exploit that, with high probability, the single-source shortest-paths problem can be solved correctly by considering only a rather sparse subset of the arc set. We prove a lower bound indicating the limitations of this approach: in a fairly general probabilistic model, any algorithm solving the single-source shortest-paths problem has to inspect Ω(n log n) arcs with high probability.
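The sparse-subset idea mentioned in the abstract can be illustrated with a toy Python sketch (this is not the paper's algorithm): keep only roughly c·log n of the cheapest outgoing arcs of each vertex and run Dijkstra's algorithm on the resulting subgraph. The function name, the parameter c, and the restriction to nonnegative arc costs are all assumptions made for the illustration.

```python
import heapq
import math

def sparse_sssp(n, cost, source, c=3):
    """Toy illustration of the sparse-subset idea: keep only the
    roughly c*log(n) cheapest outgoing arcs of every vertex and run
    Dijkstra on the resulting subgraph.  cost is an n x n matrix of
    nonnegative arc costs modelling a complete network."""
    d = max(1, int(c * math.log(max(n, 2))))
    # For each vertex, retain only the d cheapest outgoing arcs.
    adj = [sorted(((cost[u][v], v) for v in range(n) if v != u))[:d]
           for u in range(n)]
    dist = [math.inf] * n
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > dist[u]:
            continue  # stale queue entry
        for w, v in adj[u]:
            if du + w < dist[v]:
                dist[v] = du + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```

On random inputs the retained arcs suffice with high probability; on adversarial inputs the sparse subgraph can of course miss shortest paths entirely.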
Putting your data structure on a diet
 In preparation (2006). [Ask Jyrki for details]
, 2007
Cited by 2 (2 self)
Abstract. Consider a data structure D that stores a dynamic collection of elements. Assume that D uses a linear number of words in addition to the elements stored. In this paper several data-structural transformations are described that can be used to transform D into another data structure D′ that supports the same operations as D, has considerably smaller memory overhead than D, and performs the supported operations slower than D by a small constant factor or a small additive term, depending on the data structure and operation in question. The compaction technique has been successfully applied to linked lists, dictionaries, and priority queues.
Melding Priority Queues
 In Proc. of 9th SWAT
, 2004
Cited by 2 (0 self)
We show that any priority queue data structure that supports insert, delete, and find-min operations in pq(n) time, where n is an upper bound on the number of elements in the priority queue, can be converted into a priority queue data structure that also supports fast meld operations with essentially no increase in the amortized cost of the other operations. More specifically, the new data structure supports insert, meld and find-min operations in O(1) amortized time, and delete operations in O(pq(n) + α(n, n)) amortized time, where α(m, n) is a functional inverse of the Ackermann function. The construction is very simple, essentially just placing a non-meldable priority queue at each node of a union-find data structure. We also show that when all keys are integers in the range [1, N], we can replace n in the bound stated above by min{n, N}. Applying this result to non-meldable priority queue data structures obtained recently by Thorup, and by Han and Thorup, we obtain meldable RAM priority queues with O(log log n) amortized cost per operation, or O(√(log log n)) expected amortized cost per operation, respectively. As a byproduct, we obtain improved algorithms for the minimum directed spanning tree problem in graphs with integer edge weights: a deterministic O(m log log n) time algorithm and a randomized O(m√(log log n)) time algorithm. These bounds improve, for sparse enough graphs, on the O(m + n log n) running time of an algorithm by Gabow, Galil, Spencer and Tarjan that works for arbitrary edge weights.
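The construction described in the abstract (a non-meldable priority queue at each node of a union-find structure) can be sketched in Python. This is a deliberately simplified toy: it uses heapq as the non-meldable queue, and its meld moves elements eagerly, so it does not achieve the paper's O(1) amortized meld; the class and method names are invented for the illustration.

```python
import heapq

class MeldablePQ:
    """Toy sketch: a union-find forest whose representatives each
    carry a non-meldable priority queue (Python's binary heap)."""

    def __init__(self):
        self.parent = {}  # union-find parent pointers
        self.heaps = {}   # representative -> list managed by heapq

    def make(self, key):
        """Create a new single-element queue; returns its handle."""
        node = object()
        self.parent[node] = node
        self.heaps[node] = [key]
        return node

    def find(self, x):
        # Union-find lookup with path halving.
        while self.parent[x] is not x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def insert(self, x, key):
        heapq.heappush(self.heaps[self.find(x)], key)

    def find_min(self, x):
        return self.heaps[self.find(x)][0]

    def delete_min(self, x):
        return heapq.heappop(self.heaps[self.find(x)])

    def meld(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx is ry:
            return rx
        if len(self.heaps[rx]) < len(self.heaps[ry]):
            rx, ry = ry, rx  # union by size
        self.parent[ry] = rx
        # Eager re-insertion is the simplification that forfeits the
        # paper's O(1) amortized meld bound.
        for key in self.heaps.pop(ry):
            heapq.heappush(self.heaps[rx], key)
        return rx
```

After a meld, any old handle into either queue still works, because find routes it to the surviving representative.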
Violation heaps: A better substitute for Fibonacci heaps
, 2008
Cited by 2 (0 self)
We give a priority queue that achieves the same amortized bounds as Fibonacci heaps: find-min requires O(1) worst-case time, insert, meld and decrease-key require O(1) amortized time, and delete-min requires O(log n) amortized time. Our structure is simple and promises more efficient practical behavior than any other known Fibonacci-like heap.
Continuous Monitoring of Distance-Based Outliers over Data Streams
Cited by 1 (1 self)
Abstract—Anomaly detection is considered an important data mining task, aiming at the discovery of elements (also known as outliers) that deviate significantly from the expected case. More specifically, given a set of objects, the problem is to return the suspicious objects that deviate significantly from the typical behavior. As in the case of clustering, applying different criteria leads to different definitions of an outlier. In this work, we focus on distance-based outliers: an object x is an outlier if fewer than k objects lie at distance at most R from x. The problem poses significant challenges in a stream-based environment, where data arrive continuously and outliers must be detected on-the-fly. A few research works study the problem of continuous outlier detection, but none of these proposals meets the requirements of modern stream-based applications, for the following reasons: (i) they demand a significant storage overhead, (ii) their efficiency is limited, and (iii) they lack flexibility. In this work, we propose new algorithms for continuous outlier monitoring in data streams, based on sliding windows. Our techniques reduce the required storage overhead, run faster than previously proposed techniques, and offer significant flexibility. Experiments performed on real-life as well as synthetic data sets verify our theoretical study.
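The distance-based outlier definition above (fewer than k objects within distance R of x) can be checked naively over the current window; a minimal Python sketch follows. It recomputes all pairwise distances on every call, O(n²) per window, unlike the paper's incremental sliding-window techniques, and the function name and signature are assumptions.

```python
from math import dist  # Euclidean distance, Python 3.8+

def window_outliers(window, k, R):
    """Naive check of the distance-based outlier definition: a point
    is an outlier if fewer than k other points of the window lie
    within distance R of it.  O(n^2) per call."""
    result = []
    for i, x in enumerate(window):
        neighbours = sum(1 for j, y in enumerate(window)
                         if i != j and dist(x, y) <= R)
        if neighbours < k:
            result.append(x)
    return result
```

A streaming algorithm would instead maintain neighbour counts incrementally as points enter and expire from the window, which is where the storage/efficiency trade-offs discussed in the abstract arise.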
Efficient CCG Parsing: A* versus Adaptive Supertagging
Cited by 1 (0 self)
We present a systematic comparison and combination of two orthogonal techniques for efficient parsing of Combinatory Categorial Grammar (CCG). First we consider adaptive supertagging, a widely used approximate search technique that prunes most lexical categories from the parser’s search space using a separate sequence model. Next we consider several variants on A*, a classic exact search technique which, to our knowledge, has not been applied to more expressive grammar formalisms like CCG. In addition to standard hardware-independent measures of parser effort, we also present what we believe is the first evaluation of A* parsing on the more realistic but more stringent metric of CPU time. By itself, A* substantially reduces parser effort as measured by the number of edges considered during parsing, but we show that for CCG this does not always correspond to improvements in CPU time over a CKY baseline. Combining A* with adaptive supertagging decreases CPU time by 15% for our best model.
On the power of structural violations in priority queues
 In Proc. 13th Computing: The Australasian Theory Symposium, volume 65 of CRPIT
, 2007
Cited by 1 (0 self)
Abstract. We give a priority queue which guarantees the worst-case cost of Θ(1) per minimum finding, insertion, and decrease (often called decrease-key), and the worst-case cost of Θ(lg n), with at most lg n + O(√lg n) element comparisons, per minimum deletion and deletion. Here, n denotes the number of elements stored in the data structure prior to the operation in question, and lg n is a shorthand for max{1, log₂ n}. In contrast to a run-relaxed heap, which allows heap-order violations, our priority queue relies on structural violations. The motivation comes from a recent paper by Kaplan and Tarjan, where they asked whether these two apparently different notions of a violation are equivalent in power.
Rank-Relaxed Weak Queues: Faster than Pairing and Fibonacci Heaps?
, 2009
Cited by 1 (1 self)
A run-relaxed weak queue by Elmasry et al. (2005) is a priority queue data structure with insert and decrease-key in O(1) as well as delete and delete-min in O(log n) worst-case time. A further advantage is the small space consumption of 3n + O(log n) pointers. In this paper we propose rank-relaxed weak queues, reducing the number of rank-violation nodes per level to a constant while providing amortized constant time for decrease-key. Compared to run-relaxed weak queues, the new structure additionally saves one pointer per node. An empirical evaluation shows that the implementation can outperform Fibonacci and pairing heaps in practice, even on rather simple data types.
Rank-Pairing Heaps
Cited by 1 (0 self)
Abstract. We introduce the rank-pairing heap, a heap (priority queue) implementation that combines the asymptotic efficiency of Fibonacci heaps with much of the simplicity of pairing heaps. Unlike all other heap implementations that match the bounds of Fibonacci heaps, our structure needs only one cut and no other structural changes per key decrease; the trees representing the heap can evolve to have arbitrary structure. Our initial experiments indicate that rank-pairing heaps perform almost as well as pairing heaps on typical input sequences and better on worst-case sequences.