Results 1-10 of 571
Diversity and Multiplexing: A Fundamental Tradeoff in Multiple Antenna Channels
 IEEE Trans. Inform. Theory
, 2002
"... Multiple antennas can be used for increasing the amount of diversity or the number of degrees of freedom in wireless communication systems. In this paper, we propose the point of view that both types of gains can be simultaneously obtained for a given multiple antenna channel, but there is a fund ..."
Abstract

Cited by 642 (16 self)
Multiple antennas can be used for increasing the amount of diversity or the number of degrees of freedom in wireless communication systems. In this paper, we propose the point of view that both types of gains can be simultaneously obtained for a given multiple antenna channel, but there is a fundamental tradeoff between how much of each any coding scheme can get. For the richly scattered Rayleigh fading channel, we give a simple characterization of the optimal tradeoff curve and use it to evaluate the performance of existing multiple antenna schemes.
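For a Rayleigh channel with m transmit and n receive antennas (and sufficiently long block length), the optimal tradeoff curve characterized in this work is the piecewise-linear function connecting the points (k, (m - k)(n - k)) for integer k = 0, ..., min(m, n). A minimal sketch, with a function name of our choosing:

```python
def dmt_curve(m, n, r):
    """Optimal diversity gain d*(r) for an m x n Rayleigh MIMO channel:
    piecewise-linear interpolation of the points (k, (m-k)*(n-k)),
    k = 0..min(m, n)."""
    if not 0 <= r <= min(m, n):
        raise ValueError("multiplexing gain r must lie in [0, min(m, n)]")
    k = int(r)  # integer point starting the segment containing r
    if k == min(m, n):
        return 0.0
    d_k = (m - k) * (n - k)            # diversity at integer point k
    d_k1 = (m - k - 1) * (n - k - 1)   # diversity at integer point k + 1
    return d_k + (r - k) * (d_k1 - d_k)  # linear interpolation
```

At r = 0 this gives the full diversity gain m * n, and at r = min(m, n) the diversity gain drops to zero, matching the tradeoff the abstract describes.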
Clustering Gene Expression Patterns
, 1999
"... Recent advances in biotechnology allow researchers to measure expression levels for thousands of genes simultaneously, across different conditions and over time. Analysis of data produced by such experiments offers potential insight into gene function and regulatory mechanisms. A key step in the ana ..."
Abstract

Cited by 332 (11 self)
Recent advances in biotechnology allow researchers to measure expression levels for thousands of genes simultaneously, across different conditions and over time. Analysis of data produced by such experiments offers potential insight into gene function and regulatory mechanisms. A key step in the analysis of gene expression data is the detection of groups of genes that manifest similar expression patterns. The corresponding algorithmic problem is to cluster multicondition gene expression patterns. In this paper we describe a novel clustering algorithm that was developed for analysis of gene expression data. We define an appropriate stochastic error model on the input, and prove that under the conditions of the model, the algorithm recovers the cluster structure with high probability. The running time of the algorithm on an n-gene dataset is O(n^2 (log n)^c). We also present a practical heuristic based on the same algorithmic ideas. The heuristic was implemented and its p...
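As a toy illustration of the clustering task itself (not the paper's algorithm, whose error model and recovery guarantees are specific to it), a naive single-linkage-style grouping of expression profiles by a correlation threshold might look like:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy)

def cluster_profiles(profiles, threshold=0.9):
    """Greedy single-linkage clustering: genes whose expression profiles
    correlate above `threshold` end up in the same cluster (union-find)."""
    n = len(profiles)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if pearson(profiles[i], profiles[j]) >= threshold:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

This naive baseline is quadratic in n per similarity pass; the paper's contribution is an algorithm with provable recovery under its stochastic error model, which this sketch does not attempt.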
Multicast-Based Inference of Network-Internal Characteristics: Accuracy of Packet Loss Estimation
 IEEE Transactions on Information Theory
, 1998
"... We explore the use of endtoend multicast traffic as measurement probes to infer networkinternal characteristics. We have developed in an earlier paper [2] a Maximum Likelihood Estimator for packet loss rates on individual links based on losses observed by multicast receivers. This technique explo ..."
Abstract

Cited by 252 (31 self)
We explore the use of end-to-end multicast traffic as measurement probes to infer network-internal characteristics. We have developed in an earlier paper [2] a Maximum Likelihood Estimator for packet loss rates on individual links based on losses observed by multicast receivers. This technique exploits the inherent correlation between such observations to infer the performance of paths between branch points in the multicast tree spanning the probe source and its receivers. We evaluate through analysis and simulation the accuracy of our estimator under a variety of network conditions. In particular, we report on the error between inferred loss rates and actual loss rates as we vary the network topology, propagation delay, packet drop policy, background traffic mix, and probe traffic type. In all but one case, estimated losses and probe losses agree to within 2 percent on average. We feel this accuracy is enough to reliably identify congested links in a wide-area internetwork. Keywords: Internet performance, end-to-end measurements, Maximum Likelihood Estimator, tomography.
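For the simplest two-receiver multicast tree the inference has a closed form: if g1 and g2 are the fractions of probes seen by each receiver and g12 the fraction seen by both, the pass probability of the shared link is estimated as g1*g2/g12. A simulation sketch under independent per-link Bernoulli losses (an assumption of this illustration, and names of our choosing):

```python
import random

def estimate_shared_pass_rate(a0, a1, a2, n_probes=200_000, seed=1):
    """Simulate multicast probes on a two-receiver tree with independent
    per-link pass probabilities a0 (shared link), a1 and a2 (leaf links),
    then recover a0 from receiver observations alone via g1*g2/g12."""
    rng = random.Random(seed)
    seen1 = seen2 = seen12 = 0
    for _ in range(n_probes):
        shared = rng.random() < a0          # probe survives shared link?
        r1 = shared and rng.random() < a1   # receiver 1 sees the probe
        r2 = shared and rng.random() < a2   # receiver 2 sees the probe
        seen1 += r1
        seen2 += r2
        seen12 += r1 and r2
    g1, g2, g12 = seen1 / n_probes, seen2 / n_probes, seen12 / n_probes
    # g1*g2/g12 = (a0*a1)(a0*a2)/(a0*a1*a2) = a0 under independence
    return g1 * g2 / g12
```

The correlation between the two receivers' observations is exactly what identifies the shared link, as the abstract describes.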
Randomized Gossip Algorithms
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 2006
"... Motivated by applications to sensor, peertopeer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join a ..."
Abstract

Cited by 208 (5 self)
Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of "gossip" algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.
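The averaging problem itself is easy to state: each node holds a number, and repeated pairwise averaging with randomly chosen partners drives every node toward the global mean while preserving the sum. A minimal sketch on a complete graph (the paper's analysis covers arbitrary graphs and optimizes the neighbor-selection probabilities, which this sketch does not):

```python
import random

def gossip_average(values, rounds=2000, seed=0):
    """Pairwise randomized gossip: in each step two distinct nodes are
    picked uniformly at random and both replace their values with the
    pair's average. The global sum is invariant (up to float rounding),
    so all values converge to the mean."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)
        avg = (x[i] + x[j]) / 2
        x[i] = x[j] = avg
    return x
```

The number of rounds needed for a given accuracy is the "averaging time" the abstract relates to the second largest eigenvalue of the algorithm's doubly stochastic matrix.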
Large Deviations and Overflow Probabilities for the General SingleServer Queue, With Applications
, 1994
"... We consider from a thermodynamic viewpoint queueing systems where the workload process is assumed to have an associated large deviation principle with arbitrary scaling: there exist increasing scaling functions (a t ; v t ; t 2 R+ ) and a rate function I such that if (W t ; t 2 R+ ) denotes the wo ..."
Abstract

Cited by 180 (18 self)
We consider from a thermodynamic viewpoint queueing systems where the workload process is assumed to have an associated large deviation principle with arbitrary scaling: there exist increasing scaling functions (a_t, v_t; t ∈ R+) and a rate function I such that if (W_t; t ∈ R+) denotes the workload process, then lim_{t→∞} v_t^{-1} log P(W_t / a_t > w) = -I(w) on the continuity set of I. In the case that a_t = v_t = t it has been argued heuristically, and recently proved in a fairly general context (for discrete-time models) by Glynn and Whitt [8], that the queue-length distribution (that is, the distribution of the supremum of the workload process Q = sup_{t≥0} W_t) decays exponentially: P(Q > b) ≈ e^{-δb}, and the decay rate δ is directly related to the rate function I. We establish conditions for a more general result to hold, where the scaling functions are not necessarily linear in t: we find that the queue-length distribution has an exponential tail only if l...
Scheduling for Multiple Flows Sharing a Time-Varying Channel: The Exponential Rule
 American Mathematical Society Translations, Series
, 2000
"... We consider the following queueing system which arises as a model of a wireless link shared by multiple users. Multiple flows must be served by a "channel" (server). The channel capacity (service rate) changes in time randomly and asynchronously with respect to different flows. In each time slot, a ..."
Abstract

Cited by 131 (13 self)
We consider the following queueing system which arises as a model of a wireless link shared by multiple users. Multiple flows must be served by a "channel" (server). The channel capacity (service rate) changes in time randomly and asynchronously with respect to different flows. In each time slot, a scheduling discipline (rule) picks a flow for service based on the current state of the channel and the queues. We study a scheduling rule, which we call the exponential rule, and prove that this rule is throughput-optimal, i.e., it makes the queues stable if there exists any rule which can do so. In the proof we use the fluid limit technique, along with a separation of time scales argument. Namely, the proof of the desired property of a "conventional" fluid limit involves a study of a different fluid limit arising on a "finer" time scale. In our companion paper [12] it is demonstrated that the exponential rule can be used to provide Quality of Service guarantees over a shared wireless link.
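One commonly cited form of the exponential rule serves, in each slot, the flow maximizing b_i * mu_i(t) * exp(a_i * q_i(t) / (beta + qbar(t))^eta), where qbar is the mean weighted queue length; the exact constants and exponents vary by presentation, so treat the parameters below as illustrative rather than as the paper's definition:

```python
import math

def exp_rule_pick(queues, rates, a=None, b=None, beta=1.0, eta=0.5):
    """Pick the flow index maximizing
        b_i * mu_i * exp(a_i * q_i / (beta + qbar) ** eta),
    where qbar is the mean of a_j * q_j over flows. One common statement
    of the exponential rule; parameter choices here are illustrative."""
    n = len(queues)
    a = a or [1.0] * n
    b = b or [1.0] * n
    qbar = sum(ai * qi for ai, qi in zip(a, queues)) / n
    index = [
        b[i] * rates[i] * math.exp(a[i] * queues[i] / (beta + qbar) ** eta)
        for i in range(n)
    ]
    return max(range(n), key=lambda i: index[i])
```

With equal queues the rule behaves opportunistically (it favors the flow with the better current rate), while a queue that grows much larger than the others eventually dominates the exponent and gets served, which is the intuition behind throughput optimality.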
The sample average approximation method for stochastic discrete optimization
 SIAM Journal on Optimization
, 2001
"... Abstract. In this paper we study a Monte Carlo simulation based approach to stochastic discrete optimization problems. The basic idea of such methods is that a random sample is generated and consequently the expected value function is approximated by the corresponding sample average function. The ob ..."
Abstract

Cited by 127 (16 self)
Abstract. In this paper we study a Monte Carlo simulation-based approach to stochastic discrete optimization problems. The basic idea of such methods is that a random sample is generated and consequently the expected value function is approximated by the corresponding sample average function. The obtained sample average optimization problem is solved, and the procedure is repeated several times until a stopping criterion is satisfied. We discuss convergence rates and stopping rules of this procedure and present a numerical example of the stochastic knapsack problem.
Key words. Stochastic programming, discrete optimization, Monte Carlo sampling, Law of Large Numbers, Large Deviations theory, sample average approximation, stopping rules, stochastic knapsack problem
AMS subject classifications. 90C10, 90C15
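The procedure the abstract describes can be sketched end to end on a tiny stochastic knapsack: approximate the expectation by a sample average, then solve the resulting deterministic problem (here by brute force over subsets). In this simplification the random weights enter only through their sample means; all problem data and names below are made up for illustration:

```python
import itertools
import random

def saa_knapsack(values, weight_dists, capacity, n_samples=500, seed=0):
    """Sample average approximation for a knapsack with random item
    weights: replace each random weight by its sample average over
    n_samples scenarios, then solve the deterministic problem exactly."""
    rng = random.Random(seed)
    n = len(values)
    # one draw of every item's weight per scenario
    samples = [[weight_dists[i](rng) for i in range(n)] for _ in range(n_samples)]
    avg_w = [sum(s[i] for s in samples) / n_samples for i in range(n)]
    best_val, best_set = 0.0, ()
    for subset in itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(n + 1)
    ):
        if sum(avg_w[i] for i in subset) <= capacity:
            val = sum(values[i] for i in subset)
            if val > best_val:
                best_val, best_set = val, subset
    return best_set, best_val
```

Repeating this with independent sample batches and comparing the returned solutions gives the stopping-rule loop the abstract refers to; the brute-force inner solve is only viable for toy instance sizes.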
Large Deviations, the Shape of the Loss Curve, and Economies of Scale in Large Multiplexers
, 1995
"... We analyse the queue Q L at a multiplexer with L inputs. We obtain a large deviation result, namely that under very general conditions lim L!1 L \Gamma1 log P[Q L ? Lb] = \GammaI (b) provided the offered load is held constant, where the shape function I is expressed in terms of the cumulant ..."
Abstract

Cited by 114 (11 self)
We analyse the queue Q_L at a multiplexer with L inputs. We obtain a large deviation result, namely that under very general conditions lim_{L→∞} L^{-1} log P[Q_L > Lb] = -I(b) provided the offered load is held constant, where the shape function I is expressed in terms of the cumulant generating functions of the input traffic. This provides an improvement on the usual effective bandwidth approximation P[Q_L > b] ≈ e^{-δb}, replacing it with P[Q_L > b] ≈ e^{-L I(b/L)}. The difference I(b) - δb determines the economies of scale which are to be obtained in large multiplexers. If the limit ν = -lim_{t→∞} λ_t(δ) exists (here λ_t is the finite-time cumulant of the workload process) then lim_{b→∞} (I(b) - δb) = ν. We apply this idea to a number of examples of arrivals processes: heterogeneous superpositions, Gaussian processes, Markovian additive processes and Poisson processes. We obtain expressions for ν in these cases. ν is zero for independent arrivals, but positive for arrivals with positive correlations. Thus economies of scale are obtainable for highly bursty traffic expected in ATM multiplexing.
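Combining the two statements in the abstract makes the economy of scale explicit: writing I(b) ≈ δb + ν for large b,

```latex
P[Q_L > b] \;\approx\; e^{-L\, I(b/L)}
  \;\approx\; e^{-L\left(\delta\, b/L \,+\, \nu\right)}
  \;=\; e^{-\delta b}\, e^{-L \nu},
```

so whenever ν > 0 (positively correlated, bursty arrivals) the refined estimate improves on the effective bandwidth approximation e^{-δb} by a factor that decays exponentially in the number of inputs L.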