Results 1-10 of 319
On the capacity of MIMO broadcast channel with partial side information
 IEEE Trans. Inform. Theory
, 2005
Abstract

Cited by 173 (6 self)
Abstract—In multiple-antenna broadcast channels, unlike point-to-point multiple-antenna channels, the multiuser capacity depends heavily on whether the transmitter knows the channel coefficients to each user. For instance, in a Gaussian broadcast channel with M transmit antennas and n single-antenna users, the sum rate capacity scales like M log log n for large n if perfect channel state information (CSI) is available at the transmitter, yet only logarithmically with M if it is not. In systems with large n, obtaining full CSI from all users may not be feasible. Since lack of CSI does not lead to multiuser gains, it is therefore of interest to investigate transmission schemes that employ only partial CSI. In this paper, we propose a scheme that constructs M random beams and that transmits information to the users with the highest signal-to-noise-plus-interference ratios (SINRs), which can be made available to the transmitter with very little feedback. For fixed M and increasing n, the throughput of our scheme scales as M log log nN, where N is the number of receive antennas of each user. This is precisely the same scaling obtained with perfect CSI using dirty paper coding. We furthermore show that a linear increase in throughput with M can be obtained provided that M does not grow faster than log n. We also study the fairness of our scheduling in a heterogeneous network and show that, when n is large enough, the system becomes interference dominated and the probability of transmitting to any user converges to 1/n, irrespective of its path loss. In fact, using M = log n transmit antennas emerges as a desirable operating point, both in terms of providing linear scaling of the throughput with M as well as in guaranteeing fairness. Index Terms—Broadcast channel, channel state information (CSI), multiuser diversity, wireless communications.
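The beam-selection step described in this abstract is concrete enough to sketch. The code below is an illustrative toy, not the paper's implementation: it uses two real-valued orthonormal beams, Gaussian channels, and hypothetical function names, whereas the actual scheme works with complex channels and arbitrary M.

```python
import math
import random

def random_orthonormal_beams():
    # Two random orthonormal beams: the columns of a random 2x2 rotation.
    t = random.uniform(0.0, 2.0 * math.pi)
    return [(math.cos(t), math.sin(t)), (-math.sin(t), math.cos(t))]

def sinr(h, beams, m, snr=10.0):
    # SINR of beam m at a user with channel vector h: desired beam power
    # over noise plus interference from the other beam.
    dots = [h[0] * b[0] + h[1] * b[1] for b in beams]
    signal = snr * dots[m] ** 2
    interference = snr * sum(d ** 2 for i, d in enumerate(dots) if i != m)
    return signal / (1.0 + interference)

def schedule(n_users, seed=0):
    # Each user feeds back only its SINR per beam; each beam is then
    # assigned to the user reporting the largest SINR on it.
    random.seed(seed)
    beams = random_orthonormal_beams()
    channels = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n_users)]
    return [max(range(n_users), key=lambda u: sinr(channels[u], beams, m))
            for m in range(len(beams))]
```

The feedback cost is the point: each user reports one scalar per beam rather than its full channel vector, yet the scheduler still harvests multiuser diversity by picking the SINR maximizer per beam.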
Improved Approximation Algorithms for MAX k-CUT and MAX BISECTION
, 1994
Abstract

Cited by 162 (0 self)
Polynomial-time approximation algorithms with nontrivial performance guarantees are presented for the problems of (a) partitioning the vertices of a weighted graph into k blocks so as to maximise the weight of crossing edges, and (b) partitioning the vertices of a weighted graph into two blocks of equal cardinality, again so as to maximise the weight of crossing edges. The approach, pioneered by Goemans and Williamson, is via a semidefinite relaxation. 1 Introduction Goemans and Williamson [5] have significantly advanced the theory of approximation algorithms. Previous work on approximation algorithms was largely dependent on comparing heuristic solution values to that of a Linear Program (LP) relaxation, either implicitly or explicitly. This was recognised some time ago by Wolsey [11]. (One significant exception to this general rule has been the case of Bin Packing.) The main novelty of [5] is that it uses a Semidefinite Program (SDP) as a relaxation. To be more precise let...
On the optimality of multiantenna broadcast scheduling using zero-forcing beamforming
 IEEE J. SELECT. AREAS COMMUN
, 2006
Abstract

Cited by 116 (5 self)
Although the capacity of multiple-input/multiple-output (MIMO) broadcast channels (BCs) can be achieved by dirty paper coding (DPC), it is difficult to implement in practical systems. This paper investigates if, for a large number of users, simpler schemes can achieve the same performance. Specifically, we show that a zero-forcing beamforming (ZFBF) strategy, while generally suboptimal, can achieve the same asymptotic sum capacity as that of DPC, as the number of users goes to infinity. In proving this asymptotic result, we provide an algorithm for determining which users should be active under ZFBF. These users are semi-orthogonal to one another and can be grouped for simultaneous transmission to enhance the throughput of scheduling algorithms. Based on the user grouping, we propose and compare two fair scheduling schemes: round-robin ZFBF and proportional-fair ZFBF. We provide numerical results to confirm the optimality of ZFBF and to compare the performance of ZFBF and the proposed fair scheduling schemes with that of various MIMO BC strategies.
Adaptive Wavelength Routing in All-Optical Networks
 IEEE/ACM Transactions on Networking
, 1997
Abstract

Cited by 89 (0 self)
In this paper, we consider routing and wavelength assignment in wavelength-routed all-optical networks with circuit switching. The conventional approaches to address this issue consider the two aspects of the problem sequentially, by first finding a route from a predetermined set of candidate paths and then searching for an appropriate wavelength assignment. We adopt a more general approach in which we consider all paths between a source-destination pair and incorporate network state information into the routing decision. This approach performs routing and wavelength assignment jointly and adaptively, and outperforms fixed routing techniques. We present adaptive routing and wavelength assignment algorithms and evaluate their blocking performance. We obtain an algorithm to compute approximate blocking probabilities for networks employing fixed and alternate routing techniques. That algorithm can also accommodate networks with multiple fibers per link. The blocking performance of the propo...
Group reaction time distributions and an analysis of distribution statistics
 Psychological Bulletin
, 1979
Abstract

Cited by 77 (22 self)
A method of obtaining an average reaction time distribution for a group of subjects is described. The method is particularly useful for cases in which data from many subjects are available but there are only 10-20 reaction time observations per subject cell. Essentially, reaction times for each subject are organized in ascending order, and quantiles are calculated. The quantiles are then averaged over subjects to give group quantiles (cf. Vincent learning curves). From the group quantiles, a group reaction time distribution can be constructed. It is shown that this method of averaging is exact for certain distributions (i.e., the resulting distribution belongs to the same family as the individual distributions). Furthermore, Monte Carlo studies and application of the method to the combined data from three large experiments provide evidence that properties derived from the group reaction time distribution are much the same as average properties derived from the data of individual subjects. This article also examines how to quantitatively describe the shape of reaction time distributions. The use of moments and cumulants as sources of information about distribution shape is evaluated and rejected because of extreme dependence on long, outlier reaction times. As an alternative, the use of explicit distribution functions as approximations to reaction time distributions is considered. Despite the recent popularity of reaction time research, the use of reaction time distributions for both model testing and model development has been largely ignored. This is surprising in view of the fact that properties of distributions can prove decisive in discriminating among models (Sternberg, Note 1) and can falsify models that quite adequately describe the behavior of mean reaction time (Ratcliff & Murdock, 1976). Two methods have been used to obtain distributional or shape information. One
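The averaging procedure this abstract describes (sort each subject's reaction times, compute quantiles, average the quantiles across subjects) is simple enough to sketch. This is a minimal illustration assuming linear-interpolation quantiles; the function names are mine, not the article's.

```python
def quantiles(xs, probs):
    # Linear-interpolation quantiles of one subject's reaction times.
    xs = sorted(xs)
    out = []
    for p in probs:
        pos = p * (len(xs) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(xs) - 1)
        out.append(xs[lo] + (pos - lo) * (xs[hi] - xs[lo]))
    return out

def group_quantiles(per_subject_rts, probs):
    # Vincent averaging: compute each quantile per subject, then average
    # the quantiles across subjects to form the group distribution.
    qs = [quantiles(rts, probs) for rts in per_subject_rts]
    return [sum(col) / len(col) for col in zip(*qs)]
```

Because averaging happens quantile-by-quantile rather than trial-by-trial, the group distribution preserves the shape of the individual distributions even with only a handful of observations per subject.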
Long-lasting transient conditions in simulations with heavy-tailed workloads
, 1997
Abstract

Cited by 67 (5 self)
Recent evidence suggests that some characteristics of computer and telecommunications systems may be well described using heavy-tailed distributions — distributions whose tail declines like a power law, which means that the probability of extremely large observations is nonnegligible. For example, such distributions have been found to describe the lengths of bursts in network traffic and the sizes of files in some systems. As a result, system designers are increasingly interested in employing heavy-tailed distributions in simulation workloads. Unfortunately, these distributions have properties considerably different from the kinds of distributions more commonly used in simulations; these properties make simulation stability hard to achieve. In this paper we explore the difficulty of achieving stability in such simulations, using tools from the theory of stable distributions. We show that such simulations exhibit two characteristics related to stability: slow convergence to steady state, and high variability at steady state. As a result, we argue that such simulations must be treated as effectively always in a transient condition. One way to address this problem is to introduce the notion of time scale as a parameter of the simulation, and we discuss methods for simulating such systems while explicitly incorporating time scale as a parameter.
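The slow convergence the abstract warns about is easy to reproduce. The sketch below (illustrative names, not the paper's code) draws from a Pareto distribution with tail index alpha = 1.2, which has a finite mean of alpha/(alpha - 1) = 6 but infinite variance, so the running sample mean wanders for a very long time instead of settling.

```python
import random

def pareto_sample(alpha, n, seed=0):
    # Inverse-CDF sampling from Pareto(alpha) on [1, inf):
    # P(X > x) = x**(-alpha), so X = U**(-1/alpha) for uniform U in (0, 1].
    rng = random.Random(seed)
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def running_mean(xs):
    # Running sample mean of a simulated workload; with heavy tails a
    # single huge observation can shift this estimate late in the run.
    total, out = 0.0, []
    for i, x in enumerate(xs, 1):
        total += x
        out.append(total / i)
    return out
```

Plotting `running_mean(pareto_sample(1.2, 10**6))` against a light-tailed workload of the same mean makes the "effectively always transient" behavior visible: the heavy-tailed trace keeps jumping long after the light-tailed one has flattened.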
Parallel Performance Prediction using Lost Cycles Analysis
 IN PROCEEDINGS OF SUPERCOMPUTING '94
, 1994
Abstract

Cited by 66 (1 self)
Most performance debugging and tuning of parallel programs is based on the "measure-modify" approach, which is heavily dependent on detailed measurements of programs during execution. This approach is extremely time-consuming and does not lend itself to predicting performance under varying conditions. Analytic modeling and scalability analysis provide predictive power, but are not widely used in practice, due primarily to their emphasis on asymptotic behavior and the difficulty of developing accurate models that work for real-world programs. In this paper we describe a set of tools for performance tuning of parallel programs that bridges this gap between measurement and modeling. Our approach is based on lost cycles analysis, which involves measurement and modeling of all sources of overhead in a parallel program. We first describe a tool for measuring overheads in parallel programs that we have incorporated into the runtime environment for Fortran programs on the Kendall Square KSR-1. We then describe a tool that fits these overhead measurements to analytic forms. We illustrate the use of these tools by analyzing the performance tradeoffs among parallel implementations of 2D FFT. These examples show how our tools enable programmers to develop accurate performance models of parallel applications without requiring extensive performance modeling expertise.
Linear and Order Statistics Combiners for Pattern Classification
 Combining Artificial Neural Nets
, 1999
Abstract

Cited by 65 (7 self)
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the "added" error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based nonlinear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
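The factor-of-N variance reduction claimed for uncorrelated, unbiased classifiers is straightforward to check numerically. The sketch below is a toy under the chapter's idealized assumptions: each classifier's estimated boundary is the true boundary plus independent zero-mean noise (the function name is illustrative).

```python
import random
import statistics

def boundary_variance(n_classifiers, noise, trials, seed=0):
    # Model each classifier's estimated decision boundary as the true
    # boundary (here 0.0) plus zero-mean Gaussian noise. Averaging the
    # N unbiased, uncorrelated estimates should shrink the variance of
    # the combined boundary by roughly a factor of N.
    rng = random.Random(seed)
    singles, averaged = [], []
    for _ in range(trials):
        ests = [rng.gauss(0.0, noise) for _ in range(n_classifiers)]
        singles.append(ests[0])                      # one classifier alone
        averaged.append(sum(ests) / n_classifiers)   # simple-average combiner
    return statistics.pvariance(singles), statistics.pvariance(averaged)
```

With ten classifiers and unit noise, the averaged boundary's variance comes out near 0.1, one tenth of a single classifier's, matching the first-order analysis; correlated or biased classifiers would erode this gain, as the chapter's later expressions quantify.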
Nonparametric Density Estimation and Tests of Continuous Time Interest Rate Models
 Review of Financial Studies
, 1998
Abstract

Cited by 64 (2 self)
A number of recent papers have used nonparametric density estimation or nonparametric regression to study the instantaneous spot interest rate, and to test term structure models. However, little is known about the performance of these methods when applied to persistent time series, such as U.S. interest rates. This paper uses the Vasicek [1977] model to study the performance of kernel density estimates of the ergodic distribution of the instantaneous spot rate. The model's tractability allows me to analyze the MISE of the kernel estimate as a function of persistence, variance of the ergodic distribution, span of the data, sampling frequency, and kernel bandwidth. Our principal result is that persistence has an important impact on optimal bandwidth selection and on finite sample performance. We also find that sampling the data more frequently has little effect on estimator quality. We also examine one of Ait-Sahalia's [1996a] new nonparametric tests of parametric continuous-time Markov models of the instantaneous spot interest rate. The test is based on the distance between parametric and nonparametric (kernel) estimates of the ergodic distribution of the interest rate process. Our principal result is that the test rejects too often when using asymptotic critical values and 22 years of data. The reason for the high rejection rate is probably because the asymptotic distribution of the test does not depend on persistence, but the finite sample performance of the estimator does. After critical values are adjusted for size, the test has low power in distinguishing between the Vasicek and Cox-Ingersoll-Ross models when compared with a conditional moment based specification test.
Exploiting multiuser diversity for medium access control in wireless networks
 Proc. of IEEE INFOCOM
, 2003
Abstract

Cited by 64 (4 self)
Abstract—Multiuser diversity refers to a type of diversity present across different users in a fading environment. This diversity can be exploited by scheduling transmissions so that users transmit when their channel conditions are favorable. Using such an approach leads to a system capacity that increases with the number of users. However, such scheduling requires centralized control. In this paper, we consider a decentralized medium access control (MAC) protocol, where each user only has knowledge of its own channel gain. We consider a variation of the ALOHA protocol, channel-aware ALOHA; using this protocol we show that users can still exploit multiuser diversity gains. First we consider a backlogged model, where each user always has packets to send. In this case we show that the total system throughput increases at the same rate as in a system with a centralized scheduler. Asymptotically, the fraction of throughput lost due to the random access protocol is shown to be 1/e. We also consider a splitting algorithm, where the splitting sequence depends on the users' channel gains; this algorithm is shown to approach the throughput of an optimal centralized scheme. Next we consider a system with an infinite user population and random arrivals. In this case, it is proved that a variation of channel-aware ALOHA is stable for any total arrival rate in a memoryless channel, given that users can estimate the backlog. Extensions for channels with memory are also discussed.
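The 1/e factor from random access can be made concrete with a toy slotted simulation. Under the (illustrative) rule that each of n backlogged users transmits only when its own channel gain lies in its top 1/n quantile, i.e. with probability 1/n per slot, the per-slot success probability is n · (1/n) · (1 − 1/n)^(n−1), which tends to 1/e ≈ 0.368 as n grows. The names below are assumptions, not the paper's code.

```python
import random

def success_fraction(n_users, slots, seed=0):
    # Toy channel-aware ALOHA: each backlogged user transmits when its
    # channel gain is in its own top 1/n quantile (probability 1/n per
    # slot, by symmetry of i.i.d. fading); a slot succeeds iff exactly
    # one user transmits.
    rng = random.Random(seed)
    p = 1.0 / n_users
    successes = 0
    for _ in range(slots):
        transmitters = sum(1 for _ in range(n_users) if rng.random() < p)
        successes += (transmitters == 1)
    return successes / slots
```

For n = 50 the simulated success fraction sits near (49/50)^49 ≈ 0.37; the remaining slots are lost to collisions or silence, which is the decentralization cost the abstract quantifies.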