Results 1-10 of 266
Variable-Rate Variable-Power MQAM for Fading Channels
IEEE Trans. Commun., 1997
Cited by 316 (30 self)
We propose a variable-rate and variable-power MQAM modulation scheme for high-speed data transmission over fading channels. We first review results for the Shannon capacity of fading channels with channel side information, where capacity is achieved using adaptive transmission techniques. We then derive the spectral efficiency of our proposed modulation. We show that there is a constant power gap between the spectral efficiency of our proposed technique and the channel capacity, and this gap is a simple function of the required bit-error rate (BER). In addition, using just five or six different signal constellations, we achieve within 1-2 dB of the maximum efficiency using unrestricted constellation sets. We compute the rate at which the transmitter needs to update its power and rate as a function of the channel Doppler frequency for these constellation sets. We also obtain the exact efficiency loss for smaller constellation sets, which may be required if the transmitter adaptation rate is constrained by hardware limitations. Our modulation scheme exhibits a 5-10 dB power gain relative to variable-power, fixed-rate transmission, and up to 20 dB of gain relative to nonadaptive transmission. We also determine the effect of channel estimation error and delay on the BER performance of our adaptive scheme. We conclude with a discussion of coding techniques and the relationship between our proposed modulation and Shannon capacity.
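The adaptation rule behind such a scheme can be sketched from the standard uncoded-MQAM BER approximation BER ≈ 0.2·exp(-1.5γ/(M-1)); inverting it gives a spectral efficiency of log2(1 + Kγ) with K = -1.5/ln(5·BER), so the power gap to the Shannon capacity log2(1 + γ) is the constant -10·log10(K) dB. A minimal sketch (the SNR and BER values are arbitrary examples, not the paper's):

```python
import math

def mqam_spectral_efficiency(snr_linear, target_ber):
    """Adaptive-MQAM spectral efficiency (bits/s/Hz) at a given SNR.

    Uses the common uncoded-MQAM approximation
    BER ~= 0.2 * exp(-1.5 * snr / (M - 1)),
    which inverts to M = 1 + K * snr with K = -1.5 / ln(5 * BER).
    """
    K = -1.5 / math.log(5 * target_ber)   # effective SNR penalty, 0 < K < 1
    return math.log2(1 + K * snr_linear)

def shannon_capacity(snr_linear):
    return math.log2(1 + snr_linear)

snr_db = 15.0
snr = 10 ** (snr_db / 10)
ber = 1e-3
eff = mqam_spectral_efficiency(snr, ber)
cap = shannon_capacity(snr)
gap_db = -10 * math.log10(-1.5 / math.log(5 * ber))  # constant power gap
print(f"efficiency {eff:.2f} b/s/Hz, capacity {cap:.2f} b/s/Hz, gap {gap_db:.2f} dB")
```

At a target BER of 10^-3 the gap works out to roughly 5.5 dB, independent of SNR, which is what makes the gap "a simple function of the required BER."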
Algorithms for Parallel Memory I: Two-Level Memories
1992
Cited by 236 (32 self)
We provide the first optimal algorithms in terms of the number of input/outputs (I/Os) required between internal memory and multiple secondary storage devices for the problems of sorting, FFT, matrix transposition, standard matrix multiplication, and related problems. Our two-level memory model is new and gives a realistic treatment of parallel block transfer, in which during a single I/O each of the P secondary storage devices can simultaneously transfer a contiguous block of B records. The model pertains to a large-scale uniprocessor system or parallel multiprocessor system with P disks. In addition, the sorting, FFT, permutation network, and standard matrix multiplication algorithms are typically optimal in terms of the amount of internal processing time. The difficulty in developing optimal algorithms is to cope with the partitioning of memory into P separate physical devices. Our algorithms' performance can be significantly better than those obtained by the well-known but nonopti...
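The sorting bound in this model is Θ((N/(PB)) · log(N/B)/log(M/B)) I/Os; a small calculator makes the roles of the parameters concrete (the example sizes below are arbitrary, not from the paper):

```python
import math

def sorting_io_bound(N, M, B, P):
    """Leading term of the optimal I/O count for sorting N records with
    internal memory M, block size B, and P parallel disks:
    (N / (P * B)) * ceil(log(N / B) / log(M / B))."""
    n_blocks = N / B       # input size in blocks
    fanout = M / B         # blocks that fit in memory per merge pass
    passes = math.ceil(math.log(n_blocks) / math.log(fanout))
    return (N / (P * B)) * passes

# Example: 1e9 records, memory for 1e6 records, 1e3-record blocks, 8 disks.
print(sorting_io_bound(N=1e9, M=1e6, B=1e3, P=8))
```

Doubling N can add a whole extra pass over the data, which is why the logarithmic factor, not just N/(PB), governs large inputs.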
The Power of Two Choices in Randomized Load Balancing
IEEE Transactions on Parallel and Distributed Systems, 1996
Cited by 201 (23 self)
Suppose that n balls are placed into n bins, each ball being placed into a bin chosen independently and uniformly at random. Then, with high probability, the maximum load in any bin is approximately log n / log log n. Suppose instead that each ball is placed sequentially into the least full of d bins chosen independently and uniformly at random. It has recently been shown that the maximum load is then only (log log n) / (log d) + O(1) with high probability. Thus giving each ball two choices instead of just one leads to an exponential improvement in the maximum load. This result demonstrates the power of two choices, and it has several applications to load balancing in distributed systems. In this thesis, we expand upon this result by examining related models and by developing techniques for stu...
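The balls-and-bins contrast is easy to check empirically; a minimal simulation, with the bin count and seed chosen arbitrarily:

```python
import random

def max_load(n, d, seed=0):
    """Throw n balls into n bins; each ball lands in the least-loaded
    of d bins picked independently and uniformly at random."""
    rng = random.Random(seed)
    bins = [0] * n
    for _ in range(n):
        candidates = [rng.randrange(n) for _ in range(d)]
        target = min(candidates, key=lambda i: bins[i])
        bins[target] += 1
    return max(bins)

n = 100_000
print("d=1:", max_load(n, 1))  # grows like log n / log log n
print("d=2:", max_load(n, 2))  # grows like log log n / log 2 + O(1)
```

Even at this modest n, the d = 2 maximum load is typically several balls smaller than the d = 1 maximum, and the gap widens as n grows.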
Efficient replica maintenance for distributed storage systems
In Proc. of NSDI, 2006
Cited by 88 (17 self)
This paper considers replication strategies for storage systems that aggregate the disks of many nodes spread over the Internet. Maintaining replication in such systems can be prohibitively expensive, since every transient network or host failure could potentially lead to copying a server’s worth of data over the Internet to maintain replication levels. The following insights in designing an efficient replication algorithm emerge from the paper’s analysis. First, durability can be provided separately from availability; the former is less expensive to ensure and a more useful goal for many wide-area applications. Second, the focus of a durability algorithm must be to create new copies of data objects faster than permanent disk failures destroy the objects; careful choice of policies for what nodes should hold what data can decrease repair time. Third, increasing the number of replicas of each data object does not help a system tolerate a higher disk failure probability, but does help tolerate bursts of failures. Finally, ensuring that the system makes use of replicas that recover after temporary failure is critical to efficiency. Based on these insights, the paper proposes the Carbonite replication algorithm for keeping data durable at a low cost. A simulation of Carbonite storing 1 TB of data over a 365-day trace of PlanetLab activity shows that Carbonite is able to keep all data durable and uses 44% more network traffic than a hypothetical system that only responds to permanent failures. In comparison, Total Recall and DHash require almost a factor of two more network traffic than this hypothetical system.
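The second insight, that repair must outpace permanent failure, can be illustrated with a toy birth-death simulation; this is not the Carbonite algorithm, and the rates and replication target below are invented for illustration:

```python
import random

def loss_probability(target=3, fail_rate=1.0, repair_rate=10.0,
                     horizon=1.0, trials=2000, seed=0):
    """Estimate P(all replicas lost within `horizon`) for one object kept
    at up to `target` replicas.  Each replica fails permanently at rate
    `fail_rate`; one new copy is created at rate `repair_rate` whenever
    fewer than `target` replicas survive.  Rates are per unit time
    (e.g. per year).  Toy model, not the Carbonite algorithm itself."""
    rng = random.Random(seed)
    lost = 0
    for _ in range(trials):
        t, k = 0.0, target              # k = current replica count
        while t < horizon:
            total = k * fail_rate + (repair_rate if k < target else 0.0)
            t += rng.expovariate(total)  # time to next event
            if t >= horizon:
                break
            if rng.random() < k * fail_rate / total:
                k -= 1                   # a replica fails permanently
                if k == 0:
                    lost += 1
                    break
            else:
                k += 1                   # a repair completes
    return lost / trials

print("slow repair:", loss_probability(repair_rate=1.0))
print("fast repair:", loss_probability(repair_rate=20.0))
```

With repair much faster than failure, reaching zero replicas requires an unlikely burst of failures before any repair lands, matching the paper's point that repair speed, not replica count alone, buys durability.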
Classifying scheduling policies with respect to unfairness in an M/GI/1
Proc. of SIGMETRICS ’03, 2003
Cited by 87 (15 self)
It is common to classify scheduling policies based on their mean response times. Another important, but sometimes opposing, performance metric is a scheduling policy’s fairness. For example, a policy that biases towards short jobs so as to minimize mean response time may end up being unfair to long jobs. In this paper we define three types of unfairness and demonstrate large classes of scheduling policies that fall into each type. We end with a discussion on which jobs are the ones being treated unfairly.
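One common way in this line of work to make "unfair to long (or short) jobs" concrete is the per-size slowdown E[T(x)]/x, compared against the Processor-Sharing baseline 1/(1-ρ). A sketch using classical M/GI/1 formulas with exponential job sizes and an assumed load (the load value is arbitrary):

```python
# Per-size expected slowdown E[T(x)]/x in an M/GI/1 queue, using the
# classical formulas for Processor-Sharing (PS) and FCFS with
# exponential job sizes of mean 1.  A policy is often called fair at
# load rho if every job size has slowdown at most 1/(1 - rho).
rho = 0.8            # system load (assumed value)
mean_sq = 2.0        # E[S^2] for exponential(1) job sizes

def slowdown_ps(x):
    # PS: E[T(x)] = x / (1 - rho), so the slowdown is constant in x.
    return 1.0 / (1.0 - rho)

def slowdown_fcfs(x):
    # FCFS: E[T(x)] = x + lambda * E[S^2] / (2 * (1 - rho)).
    wait = rho * mean_sq / (2.0 * (1.0 - rho))  # lambda = rho for mean-1 jobs
    return (x + wait) / x

for x in (0.1, 1.0, 10.0):
    print(f"x={x:5}: PS {slowdown_ps(x):.2f}, FCFS {slowdown_fcfs(x):.2f}")
```

PS treats every size identically, while FCFS's slowdown blows up for small x and drops below the PS baseline for large x, so which jobs are treated unfairly depends on the policy.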
CapProbe: a Simple and Accurate Capacity Estimation Technique
In Proc. ACM SIGCOMM, 2004
Cited by 80 (18 self)
The problem of estimating the capacity of an Internet path is one of fundamental importance. Due to the multitude of potential applications, a large number of solutions have been proposed and evaluated. The proposed solutions so far have been successful in partially addressing the problem, but have suffered from being slow, obtrusive or inaccurate. In this work, we evaluate CapProbe, a low-cost and accurate end-to-end capacity estimation scheme that relies on packet dispersion techniques as well as end-to-end delays. The key observation that enabled the development of CapProbe is that both compression and expansion of packet pair dispersion are the result of queuing due to cross-traffic. By filtering out queuing effects from packet pair samples, CapProbe is able to estimate capacity accurately in most environments, with minimal processing and probing traffic overhead. In fact, the storage and processing requirements of CapProbe are orders of magnitude smaller than most of the previously proposed schemes. We tested CapProbe through simulation, Internet, Internet2 and wireless experiments. We found that CapProbe error percentage in capacity estimation was within 10% in almost all cases, and within 5% in most cases.
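The filtering step described above, keeping the packet-pair sample whose delays sum to the minimum, can be sketched as follows; the link speed, delay values, and distortion model are synthetic examples, not data from the paper:

```python
def capprobe_estimate(samples, packet_bits):
    """CapProbe-style filtering: among packet-pair samples, the pair
    whose two one-way delays sum to the minimum is taken to be free of
    queuing distortion, so its dispersion reflects the narrow link.
    `samples` is a list of (delay1, delay2, dispersion) tuples in seconds."""
    best = min(samples, key=lambda s: s[0] + s[1])
    return packet_bits / best[2]        # capacity in bits/s

# Synthetic example: a 10 Mb/s narrow link gives a 1500-byte packet a
# dispersion of 1.2 ms when no cross-traffic queues the pair; queued
# samples have inflated delays and distorted dispersions.
clean = (0.0200, 0.0212, 0.0012)        # no queuing: true dispersion
noisy = [(0.0200 + q, 0.0212 + 1.5 * q, 0.0012 * (1 + 50 * q))
         for q in (0.002, 0.005, 0.010)]
est = capprobe_estimate([clean] + noisy, packet_bits=1500 * 8)
print(f"estimated capacity: {est / 1e6:.1f} Mb/s")
```

Because any queuing adds delay to at least one packet of a pair, the minimum-delay-sum sample is the one whose dispersion was set by the bottleneck alone, which is why no statistical mode-finding over dispersions is needed.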
A Vehicle-to-Vehicle Communication Protocol for Cooperative Collision Warning
2004
Cited by 78 (0 self)
This paper proposes a vehicle-to-vehicle communication protocol for cooperative collision warning. Emerging wireless technologies for vehicle-to-vehicle (V2V) and vehicle-to-roadside (V2R) communications such as DSRC [1] are promising to dramatically reduce the number of fatal roadway accidents by providing early warnings. One major technical challenge addressed in this paper is to achieve low latency in delivering emergency warnings in various road situations. Based on a careful analysis of application requirements, we design an effective protocol, comprising congestion control policies, service differentiation mechanisms and methods for emergency warning dissemination. Simulation results demonstrate that the proposed protocol achieves low latency in delivering emergency warnings and efficient bandwidth usage in stressful road scenarios.
Selfish Traffic Allocation for Server Farms
2003
Cited by 77 (5 self)
We study the price of selfish routing in noncooperative networks like the Internet. In particular, we investigate the price...
An Analytic Behavior Model for Disk Drives With Read-Ahead Caches and Request Reordering
1998
Cited by 66 (8 self)
Modern disk drives read ahead data and reorder incoming requests in a workload-dependent fashion. This improves their performance, but makes simple analytical models of them inadequate for performance prediction, capacity planning, workload balancing, and so on. To address this problem we have developed a new analytic model for disk drives that do read-ahead and request reordering. We did so by developing performance models of the disk drive components (queues, caches, and the disk mechanism) and a workload transformation technique for composing them. Our model includes the effects of workload-specific parameters such as request size and spatial locality. The result is capable of predicting the behavior of a variety of real-world devices to within 17% across a variety of workloads and disk drives.
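The composition idea, transforming the workload through a cache model before pricing the surviving misses with a mechanism model, can be caricatured in a few lines; every parameter below is hypothetical, not the paper's calibrated model:

```python
def predicted_service_time(req_size_kb, seq_fraction, hit_ratio,
                           seek_ms=8.0, rotation_ms=4.0,
                           transfer_ms_per_kb=0.02):
    """Toy two-stage composition: a cache model first transforms the
    workload (read-ahead turns most sequential requests into hits),
    then a mechanism model prices the remaining misses.  All constants
    are illustrative placeholders, not measured drive parameters."""
    # Stage 1 (cache): read-ahead absorbs a large share of sequential I/O.
    effective_hit = hit_ratio + (1 - hit_ratio) * 0.8 * seq_fraction
    miss = 1.0 - effective_hit
    # Stage 2 (mechanism): each miss pays seek + rotation + transfer.
    mechanical = seek_ms + rotation_ms + req_size_kb * transfer_ms_per_kb
    return miss * mechanical    # mean ms per request; hits treated as free

print(predicted_service_time(req_size_kb=8, seq_fraction=0.5, hit_ratio=0.1))
```

The point of the structure is that changing workload locality only touches stage 1, and changing the drive only touches stage 2, which is what makes the component models composable.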
On the Analysis of Randomized Load Balancing Schemes
In Proceedings of the 9th Annual ACM Symposium on Parallel Algorithms and Architectures, 1998
Cited by 55 (7 self)
It is well known that simple randomized load balancing schemes can balance load effectively while incurring only a small overhead, making such schemes appealing for practical systems. In this paper, we provide new analyses for several such dynamic randomized load balancing schemes. Our work extends a previous analysis of the supermarket model, a model that abstracts a simple, efficient load balancing scheme in the setting where jobs arrive at a large system of parallel processors. In this model, customers arrive at a system of n servers as a Poisson stream of rate λn, λ < 1, with service requirements exponentially distributed with mean 1. Each customer chooses d servers independently and uniformly at random from the n servers, and is served according to the First In First Out (FIFO) protocol at the choice with the fewest customers. For the supermarket model, it has been shown that using d = 2 choices yields an exponential improvement in the expected time a customer spends in the syst...
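A compact simulation of the supermarket model makes the d = 1 versus d = 2 contrast visible; the server count, load, and sample size below are arbitrary choices for illustration:

```python
import random
from collections import deque

def mean_sojourn(n=50, lam=0.9, d=2, customers=20000, seed=0):
    """Supermarket model: Poisson arrivals at rate lam*n to n FIFO
    servers with exponential(1) service; each arrival joins the
    shortest of d queues sampled uniformly at random.  Returns the
    mean time a customer spends in the system."""
    rng = random.Random(seed)
    # Per-server deque of pending departure times, in FIFO order.
    pending = [deque() for _ in range(n)]
    t, total = 0.0, 0.0
    for _ in range(customers):
        t += rng.expovariate(lam * n)      # next Poisson arrival
        for q in pending:                  # drop already-served customers
            while q and q[0] <= t:
                q.popleft()
        choices = [rng.randrange(n) for _ in range(d)]
        server = min(choices, key=lambda i: len(pending[i]))
        start = pending[server][-1] if pending[server] else t
        depart = start + rng.expovariate(1.0)
        pending[server].append(depart)
        total += depart - t
    return total / customers

print("d=1:", mean_sojourn(d=1))  # each server is roughly M/M/1, E[T] = 10
print("d=2:", mean_sojourn(d=2))  # doubly exponential queue-length decay
```

With d = 1 each server behaves like an independent M/M/1 queue at load λ = 0.9 (mean sojourn 1/(1-λ) = 10), while d = 2 collapses the tail of the queue-length distribution, which is the exponential improvement the analysis formalizes.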