Results 1 - 3 of 3
Using Difficulty of Prediction to Decrease Computation: Fast Sort, Priority Queue and Convex Hull on Entropy Bounded Inputs
"... There is an upsurge in interest in the Markov model and also more general stationary ergodic stochastic distributions in theoretical computer science community recently (e.g. see [Vitter,KrishnanSl], [Karlin,Philips,Raghavan92], [Raghavan9 for use of Markov models for online algorithms, e.g., cashi ..."
Abstract

Cited by 17 (4 self)
There is an upsurge of interest in the Markov model, and also in more general stationary ergodic stochastic distributions, in the theoretical computer science community recently (e.g., see [Vitter, Krishnan, FOCS91], [Karlin, Philips, Raghavan, FOCS92], [Raghavan92] for use of Markov models for online algorithms, e.g., caching and prefetching). Their results used the fact that compressible sources are predictable (and vice versa), and showed that online algorithms can improve their performance by prediction. Actual page access sequences are in fact somewhat compressible, so their predictive methods can be of benefit. This paper investigates the interesting idea of decreasing computation by using learning in the opposite way, namely to determine the difficulty of prediction. That is, we will approximately learn the input distribution, and then improve the performance of the computation when the input is not too predictable, rather than the reverse. To our knowledge, ...
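The abstract's core observation is that compressible sources are predictable and vice versa. A minimal illustration of this link (a zeroth-order empirical entropy estimate, not the paper's learning procedure, which handles general stationary ergodic sources) is:

```python
from collections import Counter
from math import log2

def empirical_entropy(symbols):
    """Estimate the Shannon entropy (bits per symbol) of a sequence
    from its empirical symbol frequencies (a zeroth-order model)."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A highly predictable (compressible) sequence has low entropy ...
print(empirical_entropy("aaaaaaab"))   # about 0.544 bits/symbol
# ... while an unpredictable one approaches log2(alphabet size).
print(empirical_entropy("abcdabcd"))   # exactly 2.0 bits/symbol
```

Low measured entropy signals a predictable source; the paper's twist is to exploit the opposite regime, speeding up computation when the estimate is high.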
unknown title
"... let P denote the processor bound, and T denote the time bound ofa parallel algorithm for a given problem, the product PT is, clearly, lower bounded by the minimum sequential time, Ts, required to solve this problem. We say a parallel algorithm is optimal ifPT O(Ts). Discovering optimal parallel algo ..."
Abstract
Let P denote the processor bound, and T denote the time bound, of a parallel algorithm for a given problem; the product PT is, clearly, lower bounded by the minimum sequential time, Ts, required to solve this problem. We say a parallel algorithm is optimal if PT = O(Ts). Discovering optimal parallel algorithms for sorting both general and integer keys remained an open problem for a long time. Reischuk [25] proposed a randomized parallel algorithm that used n synchronous PRAM processors to sort n general keys in O(log n) time. This algorithm, however, is impractical owing to its large word-length requirements. Reif and Valiant [24] presented a randomized sorting algorithm that ran on a fixed-connection network called cube-connected cycles (CCC). This algorithm employed n processors to sort n general keys in time O(log n). Since Ω(n log n) is a sequential lower bound for this problem, their algorithm is indeed optimal. Simultaneously, Ajtai, Komlós, and Szemerédi [4] discovered a deterministic parallel algorithm for sorting n general keys in time O(log n) using a sorting network of O(n log n) processors. Later, Leighton [17] showed that this algorithm could be modified to run in O(log n) time on an n-node fixed-connection ...
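The work–time argument in this abstract can be spelled out; the concrete numbers below simply restate the sorting bounds quoted in the text:

```latex
% P processors running for time T perform at most PT operations,
% so the product is lower bounded by the sequential time:
P \cdot T \geq T_s .
% A parallel algorithm is optimal when the product matches this bound:
P \cdot T = O(T_s) .
% For comparison sorting, T_s = \Omega(n \log n); hence an algorithm
% using P = n processors and T = O(\log n) time is optimal, since
P \cdot T = n \cdot O(\log n) = O(n \log n) .
```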
Using Learning and Difficulty of Prediction to Decrease Computation: A Fast Sort and Priority Queue on Entropy Bounded Inputs
"... There is an upsurge in interest in the Markov model and also more general stationary ergodic stochastic distributions in theoretical computer science community recently, (e.g. see [Vitter, Krishnan, FOCS91], [Karlin, Philips, Raghavan, FOCS92] [Raghavan92]) for use of Markov models for online algor ..."
Abstract
There is an upsurge of interest in the Markov model, and also in more general stationary ergodic stochastic distributions, in the theoretical computer science community recently (e.g., see [Vitter, Krishnan, FOCS91], [Karlin, Philips, Raghavan, FOCS92], [Raghavan92] for use of Markov models for online algorithms, e.g., caching and prefetching). Their results used the fact that compressible sources are predictable (and vice versa), and showed that online algorithms can improve their performance by prediction. Actual page access sequences are in fact somewhat compressible, so their predictive methods can be of benefit. This paper investigates the interesting idea of decreasing computation by using learning in the opposite way, namely to determine the difficulty of prediction. That is, we will approximately learn the input distribution, and then improve the performance of the computation when the input is not too predictable, rather than the reverse. To our knowledge, this is the first case of a computational problem where we do not assume any particular fixed input distribution and yet computation is decreased when the input is less predictable, rather than the reverse. We concentrate our investigation on a basic computational problem, sorting, and a basic data structure problem, maintaining a priority queue. We present the first known sorting and priority queue algorithms whose complexity depends on the binary entropy H ≤ 1 of the input keys, where we assume that the input keys are generated from an unknown but arbitrary stationary ergodic source. That is, we assume that each of the input keys can be arbitrarily long, but has entropy H. Note that H can be ...
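One way to see intuitively why unpredictable keys can be cheaper to sort (this is an illustrative toy, not the paper's algorithm, which learns the source distribution): in a most-significant-bit radix partition of binary keys, the work is proportional to the total length of the distinguishing prefixes. Keys from a high-entropy source diverge after few bits, so the recursion terminates quickly; predictable, low-entropy keys share long common prefixes and cost more per key.

```python
def msb_radix_sort(keys, bit=0):
    """Sort binary-string keys into lexicographic order by recursive
    most-significant-bit partitioning. Each key is only examined up to
    the bit position at which it is distinguished from the others."""
    if len(keys) <= 1:
        return keys
    done = [k for k in keys if len(k) == bit]   # keys fully consumed sort first
    zeros = [k for k in keys if len(k) > bit and k[bit] == "0"]
    ones = [k for k in keys if len(k) > bit and k[bit] == "1"]
    return done + msb_radix_sort(zeros, bit + 1) + msb_radix_sort(ones, bit + 1)

print(msb_radix_sort(["101", "0", "11", "100", "01"]))
# → ['0', '01', '100', '101', '11']
```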