Results 1 - 10 of 21
Exploiting semantic proximity in peer-to-peer content searching
 In 10th International Workshop on Future Trends in Distributed Computing Systems (FTDCS 2004), Suzhou
, 2004
Abstract

Cited by 58 (12 self)
A lot of recent work has dealt with improving performance of content searching in peer-to-peer file sharing systems. In this paper we attack this problem by modifying the overlay topology describing the peer relations in the system. More precisely, we create a semantic overlay, linking nodes that are “semantically close”, by which we mean that they are interested in similar documents. This semantic overlay provides the primary search mechanism, while the initial peer-to-peer system provides the failover search mechanism. We focus on implicit approaches for discovering semantic proximity. We evaluate and compare three candidate methods, and review open questions.
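The set-overlap notion of "semantically close" peers described above can be sketched with a toy Jaccard-similarity measure. This is only one plausible implicit proximity measure; the peer names, document IDs, and helper functions below are illustrative, not taken from the paper.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two document sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def semantic_neighbors(peers: dict, me: str, k: int = 2):
    """Link to the k peers whose document sets best overlap with ours."""
    others = [(p, jaccard(peers[me], docs))
              for p, docs in peers.items() if p != me]
    others.sort(key=lambda t: t[1], reverse=True)
    return [p for p, _ in others[:k]]

# Hypothetical peers and the documents each one holds.
peers = {
    "A": {"doc1", "doc2", "doc3"},
    "B": {"doc2", "doc3", "doc4"},
    "C": {"doc9"},
    "D": {"doc1", "doc3"},
}
print(semantic_neighbors(peers, "A"))  # → ['D', 'B']
```

The semantic overlay would then route queries to these neighbors first, falling back to the underlying peer-to-peer network on a miss.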
Asymptotic approximation of the move-to-front search cost distribution and least-recently-used caching fault probabilities
, 1999
Abstract

Cited by 42 (8 self)
Consider a finite list of items n = 1, 2, ..., N that are requested according to an i.i.d. process. Each time an item is requested it is moved to the front of the list. The associated search cost C_N for accessing an item is equal to its position before being moved. If the request distribution converges to a proper distribution as N → ∞, then the stationary search cost C_N converges in distribution to a limiting search cost C. We show that, when the (limiting) request distribution has a heavy tail (e.g., generalized Zipf’s law), P(R = n) ∼ c/n^α as n → ∞, α > 1, then the limiting stationary search cost distribution P(C > n), or, equivalently, the least-recently-used (LRU) caching fault probability, satisfies lim_{n→∞} P(C > n) / P(R > n) =
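The move-to-front dynamics and the search cost C_N described in this abstract can be simulated directly. The sketch below draws requests from a Zipf-like distribution P(R = n) ∝ 1/n^α; the function names and parameter values are illustrative, not from the paper.

```python
import random

def mtf_search_costs(requests, items):
    """Simulate move-to-front: return the position (1-based) of each
    requested item before it is moved to the front of the list."""
    lst = list(items)
    costs = []
    for r in requests:
        pos = lst.index(r)           # search cost C_N = position before the move
        costs.append(pos + 1)
        lst.insert(0, lst.pop(pos))  # move the accessed item to the front
    return costs

# Zipf-like request distribution P(R = n) ∝ 1/n^α with α > 1 (heavy tail).
random.seed(0)
N, alpha = 50, 1.5
weights = [1 / n**alpha for n in range(1, N + 1)]
requests = random.choices(range(1, N + 1), weights=weights, k=10000)
costs = mtf_search_costs(requests, range(1, N + 1))
print(sum(costs) / len(costs))  # empirical average stationary search cost
```

Comparing the empirical tail of the costs against the request tail approximates the limiting ratio the paper characterizes.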
Self-Organizing Linear Search
 ACM Computing Surveys
, 1985
Abstract

Cited by 34 (6 self)
this article. Two examples of simple permutation algorithms are move-to-front, which moves the accessed record to the front of the list, shifting all records previously ahead of it back one position; and transpose, which merely exchanges the accessed record with the one immediately ahead of it in the list. These will be described in more detail later. Knuth [1973] describes several search methods that are usually more efficient than linear search. Bentley and McGeoch [1985] justify the use of self-organizing linear search in the following three contexts:
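The two permutation rules described above can be sketched in a few lines each; the helper names are illustrative, not from the survey.

```python
def access_mtf(lst, x):
    """Move-to-front: bring the accessed record to the head of the list,
    shifting everything previously ahead of it back one position."""
    i = lst.index(x)
    lst.insert(0, lst.pop(i))
    return i + 1  # search cost: position before the reorganization

def access_transpose(lst, x):
    """Transpose: swap the accessed record with its immediate predecessor."""
    i = lst.index(x)
    if i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]
    return i + 1

lst = ["a", "b", "c", "d"]
access_mtf(lst, "c")
print(lst)  # → ['c', 'a', 'b', 'd']

lst = ["a", "b", "c", "d"]
access_transpose(lst, "c")
print(lst)  # → ['a', 'c', 'b', 'd']
```

Move-to-front reacts aggressively to a single access, while transpose promotes a record only one position at a time.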
Average Case Analyses of List Update Algorithms, with Applications to Data Compression
 Algorithmica
, 1998
Abstract

Cited by 22 (4 self)
We study the performance of the Timestamp(0) (TS(0)) algorithm for self-organizing sequential search on discrete memoryless sources. We demonstrate that TS(0) is better than Move-to-front on such sources, and determine performance ratios for TS(0) against the optimal offline and static adversaries in this situation. Previous work on such sources compared online algorithms only with static adversaries. One practical motivation for our work is the use of the Move-to-front heuristic in various compression algorithms. Our theoretical results suggest that in many cases using TS(0) in place of Move-to-front in schemes that use the latter should improve compression. Tests using implementations on a standard corpus of test documents demonstrate that TS(0) leads to improved compression.
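The compression motivation in this abstract refers to the standard move-to-front transform (used, e.g., after a Burrows-Wheeler transform). The sketch below shows only that MTF baseline, not TS(0) itself; the function names are illustrative.

```python
def mtf_encode(data, alphabet):
    """Move-to-front transform: replace each symbol by its current position
    in the table, then move it to the front, so recently seen symbols
    (and runs) get small codes that compress well."""
    table = list(alphabet)
    out = []
    for c in data:
        i = table.index(c)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

def mtf_decode(codes, alphabet):
    """Inverse transform: the decoder maintains the same table."""
    table = list(alphabet)
    out = []
    for i in codes:
        c = table[i]
        out.append(c)
        table.insert(0, table.pop(i))
    return "".join(out)

codes = mtf_encode("aabbb", "abn")
print(codes)                      # → [0, 0, 1, 0, 0]: runs become zeros
print(mtf_decode(codes, "abn"))   # round-trips to "aabbb"
```

The paper's claim is that substituting TS(0) for this reorganization rule improves the resulting code distribution on memoryless sources.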
Self-Organizing Data Structures
 In
, 1998
Abstract

Cited by 22 (0 self)
We survey results on self-organizing data structures for the search problem and concentrate on two very popular structures: the unsorted linear list, and the binary search tree. For the problem of maintaining unsorted lists, also known as the list update problem, we present results on the competitiveness achieved by deterministic and randomized online algorithms. For binary search trees, we present results for both online and offline algorithms. Self-organizing data structures can be used to build very effective data compression schemes. We summarize theoretical and experimental results. 1 Introduction This paper surveys results in the design and analysis of self-organizing data structures for the search problem. The general search problem in pointer data structures can be phrased as follows. The elements of a set are stored in a collection of nodes. Each node also contains O(1) pointers to other nodes and additional state data which can be used for navigation and self-organizati...
Inner product spaces for minsum coordination mechanisms
 In STOC
, 2011
Abstract

Cited by 13 (3 self)
We study policies aiming to minimize the weighted sum of completion times of jobs in the context of coordination mechanisms for selfish scheduling problems. Our goal is to design local policies that achieve a good price of anarchy in the resulting equilibria for unrelated machine scheduling. To obtain the approximation bounds, we introduce a new technique that, while conceptually simple, seems to be quite powerful. The method entails mapping strategy vectors into a carefully chosen inner product space; costs are shown to correspond to the norm in this space, and the Nash condition also has a simple description. With this structure in place, we are able to prove a number of results, as follows. First, we consider Smith’s Rule, which orders the jobs on a machine in ascending processing time to weight ratio, and show that it achieves an approximation ratio of 4. We also demonstrate that this is the best possible for deterministic nonpreemptive strongly local policies. Since Smith’s Rule is always optimal for a given fixed assignment, this may seem unsurprising, but we then show that better approximation ratios can be obtained if either preemption or randomization is allowed.
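Smith's Rule as described above, ordering jobs in ascending processing-time to weight ratio, can be sketched on a single machine as follows (the job data and function name are illustrative):

```python
def smiths_rule(jobs):
    """Order jobs by ascending processing-time / weight ratio (Smith's Rule)
    and return the schedule plus its weighted sum of completion times.
    jobs: list of (processing_time, weight) pairs."""
    order = sorted(jobs, key=lambda j: j[0] / j[1])
    t, total = 0, 0
    for p, w in order:
        t += p          # completion time of this job in the schedule
        total += w * t  # add its weighted completion time
    return order, total

jobs = [(3, 1), (1, 2), (2, 2)]  # (processing_time, weight)
order, cost = smiths_rule(jobs)
print(order, cost)  # ratios 3, 0.5, 1 → schedule (1,2), (2,2), (3,1), cost 14
```

This ordering is optimal for a fixed assignment on one machine; the paper's contribution is bounding the price of anarchy when selfish jobs choose among unrelated machines running such local policies.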
Two New Families of List Update Algorithms
 In ISAAC'98, LNCS 1533
, 1998
Abstract

Cited by 8 (0 self)
We consider the online list accessing problem and present a new family of competitive-optimal deterministic list update algorithms, which is the largest class of such algorithms known to date. This family, called Sort-by-Rank (sbr), is parametrized with a real 0 ≤ α ≤ 1, where sbr(0) is the Move-to-Front algorithm and sbr(1) is equivalent to the Timestamp algorithm. The behaviour of sbr(α) mediates between the eager strategy of Move-to-Front and the more conservative behaviour of Timestamp. We also present a family of algorithms Sort-by-Delay (sbd) which is parametrized by the positive integers, where sbd(1) is Move-to-Front and sbd(2) is equivalent to Timestamp. In general, sbd(k) is k-competitive for k ≥ 2. This is the first class of algorithms that is asymptotically optimal for independent, identically distributed requests while each algorithm is constant-competitive. Empirical studies with both generated and real-world data are also included.
On-Line Algorithms: Competitive Analysis and Beyond
 In Algorithms and Theory of Computation Handbook
, 1999
On list update with locality of reference
 In Proc. ICALP
, 2008
Abstract

Cited by 5 (0 self)
We present a comprehensive study of the list update problem with locality of reference. More specifically, we present a combined theoretical and experimental study in which the theoretically proven and experimentally observed performance guarantees of algorithms match or nearly match. In the first part of the paper we introduce a new model of locality of reference that is based on the natural concept of runs. Using this model we develop refined theoretical analyses of popular list update algorithms. The second part of the paper is devoted to an extensive experimental study in which we have tested the algorithms on traces from benchmark libraries. It shows that the theoretical and experimental bounds differ by just a few percent. Our new bounds are substantially lower than those provided by standard competitive analysis. Another result is that the elegant Move-To-Front strategy exhibits the best performance, which confirms that it is the method of choice in practice.
Self-organizing data structures with dependent accesses
 ICALP'96, LNCS 1099
, 1995
Abstract

Cited by 4 (1 self)
We consider self-organizing data structures in the case where the sequence of accesses can be modeled by a first order Markov chain. For the simple(k) and batched(k) move-to-front schemes, explicit formulae for the expected search costs are derived and compared. We use a new approach that employs the technique of expanding a Markov chain. This approach generalizes the results of Gonnet/Munro/Suwanda. In order to analyze arbitrary memory-free move-forward heuristics for linear lists, we restrict our attention to a special access sequence, thereby reducing the state space of the chain governing the behaviour of the data structure. In the case of accesses with locality (inert transition behaviour), we find that the hierarchies of self-organizing data structures with respect to the expected search time are reversed, compared with independent accesses. Finally we look at self-organizing binary trees with the move-to-root rule and compare the expected search cost with the entropy of the Markov chain of accesses.
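The move-to-root rule for binary search trees mentioned at the end rotates each accessed key up to the root along its search path. The minimal sketch below is a hypothetical implementation of that rule (not the paper's code); the class and function names are illustrative.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Plain (unbalanced) BST insertion."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def move_to_root(root, key):
    """Access `key` and bring it to the root with single rotations
    applied bottom-up along the search path (the move-to-root rule)."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        root.left = move_to_root(root.left, key)
        if root.left is not None and root.left.key == key:
            x = root.left                      # rotate right
            root.left, x.right = x.right, root
            return x
        return root
    root.right = move_to_root(root.right, key)
    if root.right is not None and root.right.key == key:
        x = root.right                         # rotate left
        root.right, x.left = x.left, root
        return x
    return root

root = None
for k in [5, 3, 8, 2, 4]:
    root = insert(root, k)
root = move_to_root(root, 4)
print(root.key)  # → 4: the accessed key is now at the root
```

Like move-to-front for lists, the rule speeds up future accesses to recently requested keys, which is exactly what helps under the locality (inert Markov chain) regime the paper studies.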