Results 1–10 of 2,623
Probability: Theory and Examples
 Cambridge University Press
, 2011
Abstract

Cited by 805 (10 self)
Sometimes the lights are shining on me. Other times I can barely see. Lately it occurs to me what a long strange trip it's been. (Grateful Dead) In 1989, when the first edition of the book was completed, my sons David and Greg were 3 and 1, and the cover picture showed the Dow Jones at 2650. The last twenty years have brought many changes, but the song remains the same. The title of the book indicates that as we develop the theory, we will focus our attention on examples. Hoping that the book would be a useful reference for people who apply probability in their work, we have tried to emphasize the results that are important for applications, and illustrated their use with roughly 200 examples. Probability is not a spectator sport, so the book contains almost 450 exercises to challenge the reader and to deepen their understanding. The fourth edition has two major changes (in addition to a new publisher): (i) The book has been converted from TeX to LaTeX. The systematic use of labels should eventually eliminate problems with references to other points in the text.
The Weighted Majority Algorithm
, 1994
Abstract

Cited by 678 (39 self)
We study the construction of prediction algorithms in a situation in which a learner faces a sequence of trials, with a prediction to be made in each, and the goal of the learner is to make few mistakes. We are interested in the case that the learner has reason to believe that one of some pool of known algorithms will perform well, but the learner does not know which one. A simple and effective method, based on weighted voting, is introduced for constructing a compound algorithm in such a circumstance. We call this method the Weighted Majority Algorithm. We show that this algorithm is robust in the presence of errors in the data. We discuss various versions of the Weighted Majority Algorithm and prove mistake bounds for them that are closely related to the mistake bounds of the best algorithms of the pool. For example, given a sequence of trials, if there is an algorithm in the pool A that makes at most m mistakes then the Weighted Majority Algorithm will make at most c(log |A| + m) mistakes ...
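The weighted-voting scheme described in this abstract can be made concrete with a minimal sketch. The binary-prediction setting and the penalty factor beta = 0.5 below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the Weighted Majority idea (illustrative parameters).
# Wrong experts have their weights multiplied by beta; the learner follows
# the weighted majority vote of the pool at each trial.

def weighted_majority(expert_preds, outcomes, beta=0.5):
    """expert_preds[t][i] is expert i's 0/1 prediction at trial t."""
    n = len(expert_preds[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, outcome in zip(expert_preds, outcomes):
        vote1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote0 = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote1 >= vote0 else 0
        if guess != outcome:
            mistakes += 1
        # Multiplicatively shrink the weight of every expert that erred.
        weights = [w * (beta if p != outcome else 1.0)
                   for w, p in zip(weights, preds)]
    return mistakes
```

With a perfect expert in the pool, the compound algorithm's mistakes stay within a constant factor of log of the pool size, matching the c(log |A| + m) flavor of the bound quoted above.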
Design of capacity-approaching irregular low-density parity-check codes
 IEEE Transactions on Information Theory
, 2001
Abstract

Cited by 434 (7 self)
We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on [1]. Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds.
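As a small illustration of how a degree distribution constrains such an ensemble (not code from the paper), the standard identity R = 1 − (Σ_j ρ_j/j)/(Σ_i λ_i/i) gives the design rate from edge-perspective degree distributions λ(x) and ρ(x):

```python
# Illustrative helper: design rate of an irregular LDPC ensemble from its
# edge-perspective degree distributions, where lam[d] (rho[d]) is the
# fraction of edges attached to variable (check) nodes of degree d.

def design_rate(lam, rho):
    """lam, rho map degree -> edge fraction; each must sum to 1."""
    int_lam = sum(frac / d for d, frac in lam.items())   # integral of lambda
    int_rho = sum(frac / d for d, frac in rho.items())   # integral of rho
    return 1.0 - int_rho / int_lam
```

For a regular (3,6) graph this recovers the familiar rate 1/2; the optimization the abstract describes searches over such distributions for thresholds near capacity.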
Capacity of Fading Channels with Channel Side Information
, 1997
Abstract

Cited by 397 (23 self)
We obtain the Shannon capacity of a fading channel with channel side information at the transmitter and receiver, and at the receiver alone. The optimal power adaptation in the former case is "water-pouring" in time, analogous to water-pouring in frequency for time-invariant frequency-selective fading channels. Inverting the channel results in a large capacity penalty in severe fading.
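The "water-pouring in time" adaptation can be sketched for a discrete set of fading states; the bisection search and parameter names below are illustrative assumptions (the paper treats general fading distributions). Each state i has probability p[i] and power gain g[i], and transmit power max(0, mu − 1/g[i]) is poured up to a water level mu that meets the average power constraint.

```python
import math

# Sketch of time-domain water-pouring over discrete fading states
# (an illustrative assumption; not the paper's continuous formulation).

def water_pouring(p, g, Pbar, iters=100):
    lo, hi = 0.0, Pbar + max(1.0 / gi for gi in g)
    for _ in range(iters):            # bisection on the water level mu
        mu = (lo + hi) / 2
        used = sum(pi * max(0.0, mu - 1.0 / gi) for pi, gi in zip(p, g))
        lo, hi = (mu, hi) if used < Pbar else (lo, mu)
    power = [max(0.0, mu - 1.0 / gi) for gi in g]
    cap = sum(pi * math.log2(1 + gi * Pi)
              for pi, gi, Pi in zip(p, g, power))
    return power, cap
```

Deep fades (small g[i]) fall below the water level and receive no power, which is exactly why channel inversion, in contrast, pays a large penalty in severe fading.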
Modeling and performance analysis of BitTorrent-like peer-to-peer networks
 In SIGCOMM
, 2004
Abstract

Cited by 397 (2 self)
In this paper, we develop simple models to study the performance of BitTorrent, a second-generation peer-to-peer (P2P) application. We first present a simple fluid model and study the scalability, performance and efficiency of such a file-sharing mechanism. We then consider the built-in incentive mechanism of BitTorrent and study its effect on network performance. We also provide numerical results based on both simulations and real traces obtained from the Internet.
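A fluid model of this kind can be integrated numerically; the equations and parameter names below are an illustrative assumption in the spirit of the abstract, not the paper's model verbatim. Downloaders x complete at the smaller of their download capacity and the swarm's upload capacity, then linger as seeds y.

```python
# Rough Euler integration of a BitTorrent-style fluid model (illustrative).
#   x: downloaders, y: seeds
#   lam: arrival rate, theta: abort rate, gamma: seed departure rate
#   mu: upload rate, c: download rate, eta: sharing effectiveness

def simulate(lam=1.0, theta=0.01, gamma=0.2, mu=1.0, c=2.0, eta=0.8,
             T=500.0, dt=0.01):
    x = y = 0.0
    for _ in range(int(T / dt)):
        finish = min(c * x, mu * (eta * x + y))   # download completion rate
        dx = lam - theta * x - finish
        dy = finish - gamma * y
        x, y = max(0.0, x + dx * dt), max(0.0, y + dy * dt)
    return x, y
```

In the regime where upload capacity is plentiful, the steady state is governed by the download constraint (x* = lam/(theta + c), y* = c·x*/gamma under the parameters above), which is the kind of scalability conclusion such fluid models yield.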
Online Aggregation
, 1997
Abstract

Cited by 311 (44 self)
Aggregation in traditional database systems is performed in batch mode: a query is submitted, the system processes a large volume of data over a long period of time, and, eventually, the final answer is returned. This archaic approach is frustrating to users and has been abandoned in most other areas of computing. In this paper we propose a new online aggregation interface that permits users to both observe the progress of their aggregation queries and control execution on the fly. After outlining usability and performance requirements for a system supporting online aggregation, we present a suite of techniques that extend a database system to meet these requirements. These include methods for returning the output in random order, for providing control over the relative rate at which different aggregates are computed, and for computing running confidence intervals. Finally, we report on an initial implementation of online aggregation in postgres.
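A running confidence interval of the kind described can be sketched for a streaming AVG; the normal approximation and the hard-coded 95% z-value below are assumptions for illustration, and they rely on the rows arriving in random order as the abstract requires.

```python
import math

# Sketch of a running estimate + confidence interval for an online AVG,
# assuming random-order scans and a CLT-based normal approximation.

def running_avg(stream, z=1.96):
    n = total = total_sq = 0.0
    for value in stream:
        n += 1
        total += value
        total_sq += value * value
        mean = total / n
        if n > 1:
            var = (total_sq - n * mean * mean) / (n - 1)  # sample variance
            half = z * math.sqrt(max(var, 0.0) / n)
            yield mean, half    # current estimate and CI half-width
```

The half-width shrinks roughly as 1/sqrt(n), so users watching the interval tighten can stop the query early once it is precise enough, which is the interface the paper argues for.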
Synchronization and linearity: an algebra for discrete event systems
, 2001
Abstract

Cited by 250 (10 self)
The first edition of this book was published in 1992 by Wiley (ISBN 0 471 93609 X). Since this book is now out of print, and to answer the request of several colleagues, the authors have decided to make it available freely on the Web, while retaining the copyright, for the benefit of the scientific community. Copyright Statement: This electronic document is in PDF format. One needs Acrobat Reader (available freely for most platforms from the Adobe web site) to benefit from the full interactive machinery: using the package hyperref by Sebastian Rahtz, the table of contents and all LaTeX cross-references are automatically converted into clickable hyperlinks, bookmarks are generated automatically, etc. So, do not hesitate to click on references to equation or section numbers, on items of the table of contents and of the index, etc. One may freely use and print this document for one's own purpose or even distribute it freely, but not commercially, provided it is distributed in its entirety and without modifications, including this preface and copyright statement. Any use of the contents should be acknowledged according to standard scientific practice.
Testing for Common Trends
 Journal of the American Statistical Association
, 1988
Abstract

Cited by 208 (5 self)
Cointegrated multiple time series share at least one common trend. Two tests are developed for the number of common stochastic trends (i.e., for the order of cointegration) in a multiple time series with and without drift. Both tests involve the roots of the ordinary least squares coefficient matrix obtained by regressing the series onto its first lag. Critical values for the tests are tabulated, and their power is examined in a Monte Carlo study. Economic time series are often modeled as having a unit root in their autoregressive representation, or (equivalently) as containing a stochastic trend. But both casual observation and economic theory suggest that many series might contain the same stochastic trend, so that they are cointegrated. If each of n series is integrated of order 1 but can be jointly characterized by k < n stochastic trends, then the vector representation of these series has k unit roots and n − k distinct stationary linear combinations. Our proposed tests can be viewed alternatively as tests of the number of common trends, linearly independent cointegrating vectors, or autoregressive unit roots of the vector process. Both of the proposed tests are asymptotically similar. The first test (q_f) is developed under the assumption that certain components of the process have a finite-order vector autoregressive (VAR) representation, and the nuisance parameters are handled by estimating this VAR. The second test (q_c) entails computing the eigenvalues of a corrected sample first-order autocorrelation matrix, where the correction is essentially a sum of the autocovariance matrices. Previous researchers have found that U.S. postwar interest rates, taken individually, appear to be integrated of order 1. In addition, the theory of the term structure implies that yields on similar assets of different maturities will be cointegrated. Applying these tests to postwar U.S. data on the federal funds rate and the three- and twelve-month Treasury bill rates provides support for this prediction: the three interest rates appear to be cointegrated.
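The core idea of regressing a series onto its first lag and checking how close the resulting coefficient is to unity can be sketched in the univariate case. This is an illustration of the intuition only, not the paper's test statistics, and the helper name is hypothetical.

```python
# Illustrative univariate analogue: OLS coefficient of y_t on y_{t-1}.
# A coefficient near 1 signals a unit root (a stochastic trend); the
# paper's tests examine the roots of the multivariate version of this
# regression's coefficient matrix.

def first_lag_coefficient(y):
    num = sum(a * b for a, b in zip(y[1:], y[:-1]))
    den = sum(b * b for b in y[:-1])
    return num / den
```

On a simulated random walk the coefficient sits essentially at 1, while on white noise it sits near 0, which is the contrast the multivariate eigenvalue tests formalize.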
Proof of a Fundamental Result in Self-Similar Traffic Modeling
 Computer Communication Review
, 1997
Abstract

Cited by 206 (8 self)
We state and prove the following key mathematical result in self-similar traffic modeling: the superposition of many ON/OFF sources (also known as packet trains) with strictly alternating ON- and OFF-periods and whose ON-periods or OFF-periods exhibit the Noah Effect (i.e., have high variability or infinite variance) can produce aggregate network traffic that exhibits the Joseph Effect (i.e., is self-similar or long-range dependent). There is, moreover, a simple relation between the parameters describing the intensities of the Noah Effect (high variability) and the Joseph Effect (self-similarity). This provides a simple physical explanation for the presence of self-similar traffic patterns in modern high-speed network traffic that is consistent with traffic measurements at the source level. We illustrate how this mathematical result can be combined with modern high-performance computing capabilities to yield a simple and efficient linear-time algorithm for generating self-similar traffic ...
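The ON/OFF construction itself is easy to sketch: superpose sources whose ON- and OFF-periods are Pareto-distributed with tail index 1 < alpha < 2 (the Noah Effect). The source count, horizon, and tail index below are illustrative choices, and this toy version makes no claim about matching the paper's generation algorithm.

```python
import random

# Sketch of the ON/OFF superposition from the abstract: each source
# alternates heavy-tailed ON and OFF periods; the aggregate load per
# unit-time slot is the number of sources currently ON.

def pareto(alpha, rng):
    # Pareto(alpha) with minimum 1, via inverse-CDF sampling.
    return (1.0 - rng.random()) ** (-1.0 / alpha)

def aggregate_traffic(n_sources=50, horizon=1000, alpha=1.4, seed=0):
    rng = random.Random(seed)
    load = [0] * horizon
    for _ in range(n_sources):
        t, on = 0.0, rng.random() < 0.5
        while t < horizon:
            length = pareto(alpha, rng)   # heavy-tailed period length
            if on:
                for k in range(int(t), min(horizon, int(t + length))):
                    load[k] += 1
            t += length
            on = not on
    return load
```

With infinite-variance periods (alpha < 2), the autocorrelations of such an aggregate decay slowly, which is the long-range dependence the theorem predicts.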