Results 1–10 of 1,597
RTP: A Transport Protocol for Real-Time Applications
Cited by 1899 (126 self)
Status of this Memo: This document is an Internet-Draft. Internet-Drafts are working documents ...
Books in graphs
, 2008
Cited by 1802 (19 self)
A set of q triangles sharing a common edge is called a book of size q. We write β(n, m) for the maximal q such that every graph G(n, m) contains a book of size q. In this note (1) we compute β(n, cn²) for infinitely many values of c with 1/4 < c < 1/3, and (2) we show that if m ≥ (1/4 − α)n² with 0 < α < 17⁻³ (...), and G has no book of size at least ... graph G1 of order at least ...
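The definition in this abstract is easy to check by brute force: the triangles through an edge (u, v) correspond exactly to the common neighbours of u and v. A small illustrative sketch (the function name and graph encoding are my own, not the paper's):

```python
from itertools import combinations

def book_number(n, edges):
    """Largest q such that some edge of the graph lies in q triangles.
    The book at edge (u, v) has size |N(u) & N(v)|, the number of
    common neighbours of its endpoints."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return max((len(adj[u] & adj[v]) for u, v in edges), default=0)

# In K4 every edge lies in exactly 2 triangles, so its book number is 2.
k4_edges = list(combinations(range(4), 2))
print(book_number(4, k4_edges))  # -> 2
```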
Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality
, 1998
Cited by 715 (33 self)
The nearest neighbor problem is the following: given a set of n points P = {p₁, …, pₙ} in some metric space X, preprocess P so as to efficiently answer queries which require finding the point in P closest to a query point q ∈ X. We focus on the particularly interesting case of the d-dimensional Euclidean space, where X = ℝ^d under some l_p norm. Despite decades of effort, the current solutions are far from satisfactory; in fact, for large d, in theory or in practice, they provide little improvement over the brute-force algorithm which compares the query point to each data point. Of late, there has been some interest in the approximate nearest neighbors problem, which is: find a point p ∈ P that is an ε-approximate nearest neighbor of the query q, in that for all p′ ∈ P, d(p, q) ≤ (1 + ε)·d(p′, q). We present two algorithmic results for the approximate version that significantly improve the known bounds: (a) preprocessing cost polynomial in n and d, and a trul...
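The ε-approximate condition can be stated directly in code. The brute-force checker below is my own illustration of the definition, not the paper's algorithm (whose whole point is to avoid this linear scan):

```python
import math

def is_eps_approx_nn(p, q, points, eps):
    """True if p satisfies d(p, q) <= (1 + eps) * d(p', q) for all p' in P."""
    return all(math.dist(p, q) <= (1 + eps) * math.dist(pp, q)
               for pp in points)

P = [(0.0, 0.0), (1.0, 0.0)]
q = (0.55, 0.0)
print(is_eps_approx_nn((1.0, 0.0), q, P, 0.0))   # exact NN -> True
print(is_eps_approx_nn((0.0, 0.0), q, P, 0.25))  # 0.55 <= 1.25 * 0.45 -> True
print(is_eps_approx_nn((0.0, 0.0), q, P, 0.1))   # 0.55 >  1.10 * 0.45 -> False
```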
Practical network support for IP traceback
, 2000
Cited by 530 (12 self)
This paper describes a technique for tracing anonymous packet flooding attacks in the Internet back towards their source. This work is motivated by the increased frequency and sophistication of denial-of-service attacks and by the difficulty in tracing packets with incorrect, or “spoofed”, source addresses. In this paper we describe a general purpose traceback mechanism based on probabilistic packet marking in the network. Our approach allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from Internet Service Providers (ISPs). Moreover, this traceback can be performed “post-mortem” – after an attack has completed. We present an implementation of this technology that is incrementally deployable, (mostly) backwards compatible and can be efficiently implemented using conventional technology.
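The idea of probabilistic packet marking can be sketched in a few lines. This toy simulation uses node sampling — each router overwrites a single mark field with some probability — which is a simplification of the paper's edge-sampling encoding; names and parameters are illustrative:

```python
import random

def send_packet(path, mark_prob, rng):
    """Each router on the path overwrites the packet's mark field with
    its own address with probability mark_prob; the victim sees only
    the surviving mark."""
    mark = None
    for router in path:
        if rng.random() < mark_prob:
            mark = router
    return mark

rng = random.Random(1)
path = ["R1", "R2", "R3", "R4"]          # attacker -> victim
marks = {send_packet(path, 0.2, rng) for _ in range(5000)}
marks.discard(None)
print(sorted(marks))  # -> ['R1', 'R2', 'R3', 'R4']: the whole path is seen
```

Routers far from the victim are sampled less often (their marks tend to be overwritten downstream), which is why reconstruction needs many packets; and since the marks travel in the packets themselves, the victim can reconstruct the path "post-mortem" without ISP cooperation.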
A theory of memory retrieval
 Psychol. Rev
, 1978
Cited by 380 (73 self)
A theory of memory retrieval is developed and is shown to apply over a range of experimental paradigms. Access to memory traces is viewed in terms of a resonance metaphor. The probe item evokes the search set on the basis of probe-memory item relatedness, just as a ringing tuning fork evokes sympathetic vibrations in other tuning forks. Evidence is accumulated in parallel from each probe-memory item comparison, and each comparison is modeled by a continuous random walk process. In item recognition, the decision process is self-terminating on matching comparisons and exhaustive on nonmatching comparisons. The mathematical model produces predictions about accuracy, mean reaction time, error latency, and reaction time distributions that are in good accord with experimental data. The theory is applied to four item recognition paradigms (Sternberg, prememorized list, study-test, and continuous) and to speed-accuracy paradigms; results are found to provide a basis for comparison of these paradigms. It is noted that neural network models can be interfaced to the retrieval theory with little difficulty and that semantic memory models may benefit from such a retrieval scheme. At the present time, one of the major deficiencies in cognitive psychology is the lack of explicit theories that encompass more than a single experimental paradigm. The lack of such theories and some of the unfortunate consequences have been discussed recently by ...
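The comparison process described here — evidence accumulation modeled as a random walk between two boundaries — can be caricatured with a discrete simulation. The parameters and function names below are my own, not fitted values from the paper:

```python
import random

def random_walk_decision(drift, bound=20.0, rng=None):
    """One probe-memory comparison: evidence x drifts at a rate set by
    relatedness, plus unit noise per step; the walk terminates at
    +bound ('match') or -bound ('non-match').  Returns (decision, steps),
    so both accuracy and a reaction-time proxy fall out of the model."""
    rng = rng or random.Random()
    x, steps = 0.0, 0
    while abs(x) < bound:
        x += drift + rng.choice([-1.0, 1.0])
        steps += 1
    return ("match" if x > 0 else "non-match"), steps

rng = random.Random(7)
decisions = [random_walk_decision(0.3, rng=rng)[0] for _ in range(200)]
print(decisions.count("match") / len(decisions))  # positive drift: nearly all 'match'
```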
The synchronization of periodic routing messages
 IEEE/ACM Transactions on Networking
, 1994
Cited by 264 (10 self)
The paper considers a network with many apparently-independent periodic processes and discusses one method by which these processes can inadvertently become synchronized. In particular, we study the synchronization of periodic routing messages, and offer guidelines on how to avoid inadvertent synchronization. Using simulations and analysis, we study the process of synchronization and show that the transition from unsynchronized to synchronized traffic is not one of gradual degradation but is instead a very abrupt ‘phase transition’: in general, the addition of a single router will convert a completely unsynchronized traffic stream into a completely synchronized one. We show that synchronization can be avoided by the addition of randomization to the traffic sources and quantify how much randomization is necessary. In addition, we argue that the inadvertent synchronization of periodic processes is likely to become an increasing problem in computer networks.
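The remedy the abstract mentions — adding randomization to the traffic sources — amounts to a one-line change in the timer logic. A sketch, with illustrative parameter names:

```python
import random

def next_firing(now, period, jitter_frac, rng):
    """Next routing-message time: a fixed period plus a uniform random
    offset of up to +/- jitter_frac * period, so independent routers
    drift out of phase instead of locking together."""
    return now + period + rng.uniform(-jitter_frac, jitter_frac) * period

rng = random.Random(0)
times = [0.0] * 5                 # five routers, initially synchronized
for _ in range(100):
    times = [next_firing(t, 30.0, 0.2, rng) for t in times]
spread = max(times) - min(times)
print(spread > 1.0)               # -> True: the routers have desynchronized
```

With jitter_frac = 0, every router would fire at exactly t = 3000 here; how large the jitter must be to stay on the unsynchronized side of the phase transition is precisely what the paper quantifies.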
SmallBias Probability Spaces: Efficient Constructions and Applications
 SIAM J. Comput
, 1993
Cited by 258 (15 self)
We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. They are called ε-biased random variables. The number of random bits needed to generate the random variables is O(log n + log(1/ε)). Thus, if ε is polynomially small, then the size of the sample space is also polynomial. Random variables that are ε-biased can be used to construct "almost" k-wise independent random variables where ε is a function of k. These probability spaces have various applications:
1. Derandomization of algorithms: many randomized algorithms that require only k-wise independence of their random bits (where k is bounded by O(log n)) can be derandomized by using ε-biased random variables.
2. Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication.
3. Exhaustive testing of combinatorial circui...
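The defining quantity — the worst-case parity bias over all nonempty subsets — can be computed exhaustively for tiny n, which makes the definition concrete. This checker is my own illustration; the paper's contribution is constructing small ε-biased spaces, not testing them:

```python
from itertools import combinations, product

def bias(sample_space, n):
    """max over nonempty S of |Pr[parity_S = 0] - Pr[parity_S = 1]|
    under the uniform distribution on sample_space; a space is
    eps-biased when this value is at most eps.  Exponential in n."""
    worst = 0.0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            ones = sum(1 for x in sample_space if sum(x[i] for i in S) % 2)
            p1 = ones / len(sample_space)
            worst = max(worst, abs(1 - 2 * p1))
    return worst

full_space = list(product([0, 1], repeat=3))
print(bias(full_space, 3))   # -> 0.0 (perfectly unbiased, but of size 2^n)
print(bias([(1, 0, 0)], 3))  # -> 1.0 (a single point is maximally biased)
```

The point of the construction is to sit between these extremes: bias at most ε with only poly(n, 1/ε) sample points.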
Stochastic Models for the Web Graph
, 2000
Cited by 217 (10 self)
The web may be viewed as a directed graph each of whose vertices is a static HTML web page, and each of whose edges corresponds to a hyperlink from one web page to another. In this paper we propose and analyze random graph models inspired by a series of empirical observations on the web. Our graph models differ from the traditional G_{n,p} models in two ways:
1. Independently chosen edges do not result in the statistics (degree distributions, clique multitudes) observed on the web. Thus, edges in our model are statistically dependent on each other.
2. Our model introduces new vertices in the graph as time evolves. This captures the fact that the web is changing with time.
Our results are twofold: we show that graphs generated using our model exhibit the statistics observed on the web graph, and additionally, that natural graph models proposed earlier do not exhibit them. This remains true even when these earlier models are generalized to account for the arrival of vertices over time. In particular, the sparse random graphs in our models exhibit properties that do not arise in far denser random graphs generated by Erdős–Rényi models.
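A generator in the spirit of the models described — vertices arrive over time, and new edges are statistically dependent because they partially copy an earlier vertex's links. This is an illustrative "copying"-style sketch, not the paper's exact parameterization:

```python
import random

def copying_model(steps, out_degree, copy_prob, rng):
    """Each new page v picks a random earlier 'prototype' page and, for
    each of its out_degree links, either copies the prototype's
    corresponding link (with probability copy_prob) or links to a
    uniformly random earlier page."""
    links = {0: [0]}                       # seed page with a self-link
    for v in range(1, steps):
        proto = rng.randrange(v)
        out = []
        for i in range(out_degree):
            if rng.random() < copy_prob and i < len(links[proto]):
                out.append(links[proto][i])
            else:
                out.append(rng.randrange(v))
        links[v] = out
    return links

rng = random.Random(42)
g = copying_model(500, 3, 0.7, rng)
indegree = {}
for targets in g.values():
    for t in targets:
        indegree[t] = indegree.get(t, 0) + 1
print(max(indegree.values()))  # far above the mean of ~3: a heavy-tailed in-degree
```

Copying implicitly favors already-popular pages, which is what produces the skewed degree statistics that independent G_{n,p} edges cannot.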
The Gambler's Ruin Problem, Genetic Algorithms, and the Sizing of Populations
, 1997
Cited by 210 (88 self)
This paper presents a model for predicting the convergence quality of genetic algorithms. The model incorporates previous knowledge about decision making in genetic algorithms and the initial supply of building blocks in a novel way. The result is an equation that accurately predicts the quality of the solution found by a GA using a given population size. Adjustments for different selection intensities are considered and computational experiments demonstrate the effectiveness of the model.
I. Introduction
The size of the population in a genetic algorithm (GA) is a major factor in determining the quality of convergence. The question of how to choose an adequate population size for a particular domain is difficult and has puzzled GA practitioners for a long time. Hard questions are better approached using a divide-and-conquer strategy and the population sizing issue is no exception. In this case, we can identify two factors that influence convergence quality: the initial supply of build...
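For background, the gambler's-ruin analogy in the title treats a building block as a gambler whose capital must reach the population size before hitting zero. The classical ruin formula below is standard probability, offered as context rather than the paper's final sizing equation:

```python
def win_probability(x, N, p):
    """Chance of reaching N chips before 0, starting from x, when each
    round is won independently with probability p (classical
    gambler's ruin).  For p = 1/2 the answer reduces to x / N."""
    if p == 0.5:
        return x / N
    r = (1 - p) / p
    return (1 - r ** x) / (1 - r ** N)

print(win_probability(10, 100, 0.5))   # fair game -> 0.1
print(win_probability(10, 100, 0.55))  # a small per-round edge lifts this near 1
```

Larger populations correspond to a larger starting bankroll, which is the intuition behind sizing the population to make correct building blocks win their competitions with high probability.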