Results 21-30 of 847
Building a Better NetFlow
, 2004
"... Network operators need to determine the composition of the traffic mix on links when looking for dominant applications, users, or estimating traffic matrices. Cisco's NetFlow has evolved into a solution that satisfies this need by reporting flow records that summarize a sample of the traffic tr ..."
Abstract

Cited by 166 (5 self)
 Add to MetaCart
Network operators need to determine the composition of the traffic mix on links when looking for dominant applications or users, or when estimating traffic matrices. Cisco's NetFlow has evolved into a solution that satisfies this need by reporting flow records that summarize a sample of the traffic traversing the link. But sampled NetFlow has shortcomings that hinder the collection and analysis of traffic data. First, during flooding attacks the router memory and network bandwidth consumed by flow records can increase beyond what is available; second, selecting the right static sampling rate is difficult because no single rate gives the right tradeoff of memory use versus accuracy for all traffic mixes; third, the heuristics routers use to decide when a flow is reported are a poor match for most applications that work with time bins; finally, it is impossible to estimate without bias the number of active flows for aggregates with non-TCP traffic. In this paper we propose...
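The packet-sampled flow accounting this abstract describes can be illustrated with a minimal sketch: keep each packet independently with probability p, aggregate survivors into flow records, and scale counts by 1/p to estimate true totals. The function name and record layout below are hypothetical, not NetFlow's actual export format.

```python
import random
from collections import defaultdict

def sampled_flow_records(packets, p, seed=0):
    """Aggregate a packet stream into flow records while sampling
    each packet independently with probability p (hypothetical
    sketch of packet-sampled flow accounting, not NetFlow's format).
    `packets` is an iterable of (flow_key, byte_count) pairs."""
    rng = random.Random(seed)
    sampled = defaultdict(int)
    for key, nbytes in packets:
        if rng.random() < p:
            sampled[key] += nbytes
    # Scaling by 1/p gives an unbiased estimate of each flow's true
    # byte count (small flows can still be missed entirely).
    return {key: count / p for key, count in sampled.items()}
```

With p = 1.0 the estimates are exact; lowering p trades accuracy for memory, which is exactly the static-rate tradeoff the abstract criticizes.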
Secret-Key Reconciliation by Public Discussion
, 1994
"... . Assuming that Alice and Bob use a secret noisy channel (modelled by a binary symmetric channel) to send a key, reconciliation is the process of correcting errors between Alice's and Bob's version of the key. This is done by public discussion, which leaks some information about the secret ..."
Abstract

Cited by 161 (3 self)
 Add to MetaCart
(Show Context)
Assuming that Alice and Bob use a secret noisy channel (modelled by a binary symmetric channel) to send a key, reconciliation is the process of correcting errors between Alice's and Bob's versions of the key. This is done by public discussion, which leaks some information about the secret key to an eavesdropper. We show how to construct protocols that leak a minimum amount of information. However, this construction cannot be implemented efficiently. If Alice and Bob are willing to reveal an arbitrarily small amount of additional information (beyond the minimum) then they can implement polynomial-time protocols. We also present a more efficient protocol, which leaks an amount of information acceptably close to the minimum possible for sufficiently reliable secret channels (those with probability of any symbol being transmitted incorrectly as large as 15%). This work improves on earlier reconciliation approaches [R, BBR, BBBSS].
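The parity-exchange reconciliation the abstract alludes to can be sketched as one interactive pass: blocks whose public parities disagree contain an odd number of errors, and a binary search over sub-block parities pins down one bit for Bob to correct. `reconcile` and `binary_search_fix` are illustrative names; a real protocol runs multiple passes over permuted blocks.

```python
def parity(bits, lo, hi):
    # Parity of a block; this is the value exchanged publicly.
    return sum(bits[lo:hi]) % 2

def binary_search_fix(alice, bob, lo, hi):
    """Given that the block parities disagree on [lo, hi), locate one
    differing bit by halving (each round leaks one more parity bit)
    and let Bob correct it."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice, lo, mid) != parity(bob, lo, mid):
            hi = mid
        else:
            lo = mid
    bob[lo] = alice[lo]

def reconcile(alice, bob, block):
    """One pass: fix one error in every block whose parity differs.
    Blocks with an even number of errors go undetected this pass."""
    for lo in range(0, len(alice), block):
        hi = min(lo + block, len(alice))
        if parity(alice, lo, hi) != parity(bob, lo, hi):
            binary_search_fix(alice, bob, lo, hi)
```

Every exchanged parity bit is information leaked to the eavesdropper, which is why the paper studies protocols that approach the minimum possible leakage.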
Sketch-based Change Detection: Methods, Evaluation, and Applications
in Internet Measurement Conference
, 2003
"... Traffic anomalies such as failures and attacks are commonplace in today's network, and identifying them rapidly and accurately is critical for large network operators. The detection typically treats the traffic as a collection of flows that need to be examined for significant changes in traffic ..."
Abstract

Cited by 161 (17 self)
 Add to MetaCart
Traffic anomalies such as failures and attacks are commonplace in today's networks, and identifying them rapidly and accurately is critical for large network operators. Detection typically treats the traffic as a collection of flows that need to be examined for significant changes in traffic pattern (e.g., volume, number of connections). However, as link speeds and the number of flows increase, keeping per-flow state is either too expensive or too slow. We propose building compact summaries of the traffic data using the notion of sketches. We have designed a variant of the sketch data structure, the k-ary sketch, which uses a constant, small amount of memory and has constant per-record update and reconstruction cost. Its linearity property enables us to summarize traffic at various levels. We then implement a variety of time series forecast models (ARIMA, Holt-Winters, etc.) on top of such summaries and detect significant changes by looking for flows with large forecast errors. We also present heuristics for automatically configuring the model parameters. Using a ...
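A toy version of the k-ary sketch's update/estimate cycle, assuming the usual per-row estimator (bucket count minus the mean bucket load, rescaled) combined by a median across rows. The class name and SHA-256 hash are illustrative stand-ins; in the paper's use, sketches of successive time bins are subtracted (linearity) and the differences fed to the forecast models.

```python
import hashlib
import statistics

class KarySketch:
    """Illustrative k-ary sketch: `rows` hash tables of `width`
    counters each. Updates are linear, so one time bin's sketch can
    be subtracted from another's to expose large changes."""

    def __init__(self, rows=5, width=64):
        self.rows, self.width = rows, width
        self.counts = [[0.0] * width for _ in range(rows)]
        self.total = 0.0

    def _bucket(self, row, key):
        # Stand-in for a proper hash family: SHA-256 of (row, key).
        digest = hashlib.sha256(f"{row}:{key}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.width

    def update(self, key, value):
        self.total += value
        for r in range(self.rows):
            self.counts[r][self._bucket(r, key)] += value

    def estimate(self, key):
        # Per-row unbiased estimator, made robust with a median.
        k = self.width
        per_row = [
            (self.counts[r][self._bucket(r, key)] - self.total / k)
            / (1 - 1 / k)
            for r in range(self.rows)
        ]
        return statistics.median(per_row)
```

Memory is rows × width counters regardless of how many distinct flows appear, which is the point of avoiding per-flow state.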
A complexity theoretic approach to randomness
in Proceedings of the 15th Annual ACM Symposium on Theory of Computing
, 1983
"... Abstract: We study a time bounded variant of Kolmogorov complexity. This motion, together with universal hashing, can be used to show that problems solvable probabilistically in polynomial time are all within the second level of the polynomial time hierarchy. We also discuss applications to the the ..."
Abstract

Cited by 157 (1 self)
 Add to MetaCart
(Show Context)
Abstract: We study a time-bounded variant of Kolmogorov complexity. This notion, together with universal hashing, can be used to show that problems solvable probabilistically in polynomial time are all within the second level of the polynomial time hierarchy. We also discuss applications to the theory of probabilistic constructions.
Wireless information-theoretic security - Part I: Theoretical aspects
 IEEE Trans. on Information Theory
, 2006
"... In this twopart paper, we consider the transmission of confidential data over wireless wiretap channels. The first part presents an informationtheoretic problem formulation in which two legitimate partners communicate over a quasistatic fading channel and an eavesdropper observes their transmissi ..."
Abstract

Cited by 155 (12 self)
 Add to MetaCart
(Show Context)
In this two-part paper, we consider the transmission of confidential data over wireless wiretap channels. The first part presents an information-theoretic problem formulation in which two legitimate partners communicate over a quasi-static fading channel and an eavesdropper observes their transmissions through another independent quasi-static fading channel. We define the secrecy capacity in terms of outage probability and provide a complete characterization of the maximum transmission rate at which the eavesdropper is unable to decode any information. In sharp contrast with known results for Gaussian wiretap channels (without feedback), our contribution shows that in the presence of fading, information-theoretic security is achievable even when the eavesdropper has a better average signal-to-noise ratio (SNR) than the legitimate receiver; fading thus turns out to be a friend and not a foe. The issue of imperfect channel state information is also addressed. Practical schemes for wireless information-theoretic security are presented in Part II, which in some cases come close to the secrecy capacity limits given in this paper.
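The outage notion described here can be explored numerically: for quasi-static Rayleigh fading, draw instantaneous SNRs for the main and eavesdropper channels, compute the nonnegative gap between their log-capacities, and count how often it falls below a target secrecy rate. This Monte Carlo sketch is illustrative only and does not reproduce the paper's closed-form expressions.

```python
import math
import random

def secrecy_outage(avg_snr_main, avg_snr_eve, rate, trials=20000, seed=1):
    """Monte Carlo estimate of P(C_s < rate) under quasi-static
    Rayleigh fading: instantaneous SNRs are exponential with the
    given means, and C_s = max(0, log2(1+g_m) - log2(1+g_e))."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        g_m = rng.expovariate(1.0) * avg_snr_main  # main channel SNR
        g_e = rng.expovariate(1.0) * avg_snr_eve   # eavesdropper SNR
        c_s = max(0.0, math.log2(1 + g_m) - math.log2(1 + g_e))
        outages += c_s < rate
    return outages / trials
```

Even with `avg_snr_eve > avg_snr_main`, a fraction of fading realizations gives C_s > 0, which is the paper's "fading is a friend" observation.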
On Hiding Information from an Oracle
, 1989
"... : We consider the problem of computing with encrypted data. Player A wishes to know the value f(x) for some x but lacks the power to compute it. Player B has the power to compute f and is willing to send f(y) to A if she sends him y, for any y. Informally, an encryption scheme for the problem f is a ..."
Abstract

Cited by 153 (15 self)
 Add to MetaCart
We consider the problem of computing with encrypted data. Player A wishes to know the value f(x) for some x but lacks the power to compute it. Player B has the power to compute f and is willing to send f(y) to A if she sends him y, for any y. Informally, an encryption scheme for the problem f is a method by which A, using her inferior resources, can transform the cleartext instance x into an encrypted instance y, obtain f(y) from B, and infer f(x) from f(y) in such a way that B cannot infer x from y. When such an encryption scheme exists, we say that f is encryptable. The framework defined in this paper enables us to prove precise statements about what an encrypted instance hides and what it leaks, in an information-theoretic sense. Our definitions are cast in the language of probability theory and do not involve assumptions such as the intractability of factoring or the existence of one-way functions. We use our framework to describe encryption schemes for some well-known function...
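A classic instance of this setting is blinding: below, Alice hides a nonzero x from Bob's modular-inversion oracle by multiplying it with a random unit, so the y Bob sees is uniformly distributed over nonzero residues and leaks nothing about x, information-theoretically. The function names are illustrative, not from the paper.

```python
import random

def oracle(y, p):
    # Bob's powerful computation: inversion mod a prime p
    # (Python 3.8+: pow with exponent -1 computes modular inverses).
    return pow(y, -1, p)

def encrypt_instance(x, p, rng):
    """Alice blinds nonzero x with a random unit r: y = r*x mod p is
    uniform over nonzero residues, so Bob cannot infer x from y."""
    r = rng.randrange(1, p)
    return r, (r * x) % p

def recover(f_of_y, r, p):
    # Unblind Bob's answer: x^{-1} = r * (r*x)^{-1} mod p.
    return (r * f_of_y) % p
```

Here "B cannot infer x from y" holds unconditionally, matching the abstract's point that no intractability assumptions are involved.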
Efficient generation of shared RSA keys
in Advances in Cryptology - CRYPTO '97
, 1997
"... We describe efficient techniques for a number of parties to jointly generate an RSA key. At the end of the protocol an RSA modulus N = pq is publicly known. None of the parties know the factorization of N. In addition a public encryption exponent is publicly known and each party holds a share of the ..."
Abstract

Cited by 151 (5 self)
 Add to MetaCart
We describe efficient techniques for a number of parties to jointly generate an RSA key. At the end of the protocol an RSA modulus N = pq is publicly known. None of the parties knows the factorization of N. In addition, a public encryption exponent is publicly known and each party holds a share of the private exponent that enables threshold decryption. Our protocols are efficient in computation and communication. All results are presented in the honest-but-curious setting (passive adversary).
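The threshold-decryption property mentioned at the end can be illustrated with additive sharing of the private exponent: each party raises the ciphertext to its share, and the product of the partial results equals the full RSA decryption. This sketch assumes a dealer who already knows d and the group order, whereas the paper's whole point is generating these jointly so that no party ever holds them.

```python
import random

def share_exponent(d, n_parties, order, rng):
    """Additively share d modulo the group order: any proper subset
    of shares is uniformly random, revealing nothing about d.
    (Illustrative dealer-based sharing, not the paper's protocol.)"""
    shares = [rng.randrange(order) for _ in range(n_parties - 1)]
    shares.append((d - sum(shares)) % order)
    return shares

def threshold_decrypt(c, shares, N):
    """Each party computes a partial decryption c^share mod N; the
    product of all partials equals c^d mod N."""
    result = 1
    for s in shares:
        result = (result * pow(c, s, N)) % N
    return result
```

The toy parameters in the usage test are the textbook N = 61*53 = 3233, e = 17, d = 2753 example, far too small for real security.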
Reductions in Streaming Algorithms, with an Application to Counting Triangles in Graphs
"... We introduce reductions in the streaming model as a tool in the design of streaming algorithms. We develop ..."
Abstract

Cited by 151 (5 self)
 Add to MetaCart
We introduce reductions in the streaming model as a tool in the design of streaming algorithms. We develop ...
Denial of Service via Algorithmic Complexity Attacks
, 2003
"... We present a new class of lowbandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications' data structures. Frequently used data structures have "averagecase" expected running time that's far more efficient than the worst case. For examp ..."
Abstract

Cited by 142 (2 self)
 Add to MetaCart
We present a new class of low-bandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications' data structures. Frequently used data structures have "average-case" expected running time that's far more efficient than the worst case. For example, both binary trees and hash tables can degenerate to linked lists with carefully chosen input. We show how an attacker can effectively compute such input, and we demonstrate attacks against the hash table implementations in two versions of Perl, the Squid web proxy, and the Bro intrusion detection system. Using bandwidth less than a typical dial-up modem, we can bring a dedicated Bro server to its knees; after six minutes of carefully chosen packets, our Bro server was dropping as much as 71% of its traffic and consuming all of its CPU. We show how modern universal hashing techniques can yield performance comparable to commonplace hash functions while being provably secure against these attacks.
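The attack-and-defense pair in the abstract can be demonstrated in miniature: anagram keys all share a byte sum, so a predictable sum-based hash puts every key in one bucket (the linked-list worst case), while a randomly keyed Carter-Wegman-style hash scatters the same adversarial input. The two hash functions below are toy stand-ins, not Perl's or Squid's actual hashes.

```python
import random

def weak_hash(key, m):
    # Attacker-predictable hash: byte sum mod table size.
    return sum(key.encode()) % m

def seeded_hash(key, m, a, b, p=2**61 - 1):
    # Carter-Wegman-style hash: the secret random (a, b) make any
    # precomputed collision set useless.
    x = int.from_bytes(key.encode(), "big")
    return (a * x + b) % p % m

# Adversarial input: all anagrams of the same letters, hence equal
# byte sums, hence a single bucket under weak_hash.
keys = ["ab" * i + "ba" * (8 - i) for i in range(9)]
m = 64
weak_buckets = {weak_hash(k, m) for k in keys}

rng = random.Random(7)
a, b = rng.randrange(1, 2**61 - 1), rng.randrange(2**61 - 1)
seeded_buckets = {seeded_hash(k, m, a, b) for k in keys}
```

`weak_buckets` collapses to one bucket; the seeded hash spreads the same keys across several, which is the universal-hashing defense in miniature.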
Dynamic Perfect Hashing: Upper and Lower Bounds
, 1990
"... The dynamic dictionary problem is considered: provide an algorithm for storing a dynamic set, allowing the operations insert, delete, and lookup. A dynamic perfect hashing strategy is given: a randomized algorithm for the dynamic dictionary problem that takes O(1) worstcase time for lookups and ..."
Abstract

Cited by 142 (14 self)
 Add to MetaCart
The dynamic dictionary problem is considered: provide an algorithm for storing a dynamic set, allowing the operations insert, delete, and lookup. A dynamic perfect hashing strategy is given: a randomized algorithm for the dynamic dictionary problem that takes O(1) worst-case time for lookups and O(1) amortized expected time for insertions and deletions; it uses space proportional to the size of the set stored. Furthermore, lower bounds for the time complexity of a class of deterministic algorithms for the dictionary problem are proved. This class encompasses realistic hashing-based schemes that use linear space. Such algorithms have amortized worst-case time complexity Ω(log n) for a sequence of n insertions and ...
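The flavor of the O(1) worst-case lookup can be seen in the static two-level (FKS-style) construction the dynamic scheme builds on: each top-level bucket of size b gets a secondary table of size b², rehashed until collision-free, so a membership test probes exactly one slot. This sketch handles distinct integer keys only and omits the dynamic insert/delete machinery and the global rebuilding that the paper adds.

```python
import random

def build_fks(keys, seed=0):
    """Static two-level FKS-style perfect hash over distinct integer
    keys. Secondary tables of quadratic size succeed with constant
    probability per random hash, so the retry loops finish quickly."""
    rng = random.Random(seed)
    p = 2**31 - 1  # prime modulus for the universal hash family

    def make_hash(m):
        a, b = rng.randrange(1, p), rng.randrange(p)
        return lambda k: ((a * k + b) % p) % m

    n = max(len(keys), 1)
    top = make_hash(n)
    buckets = [[] for _ in range(n)]
    for k in keys:
        buckets[top(k)].append(k)

    tables = []
    for bucket in buckets:
        size = len(bucket) ** 2
        while True:
            h = make_hash(size) if size else None
            slots = [None] * size
            ok = True
            for k in bucket:
                i = h(k)
                if slots[i] is not None:  # secondary collision: rehash
                    ok = False
                    break
                slots[i] = k
            if ok:
                break
        tables.append((h, slots))

    def contains(k):
        h, slots = tables[top(k)]
        return bool(slots) and slots[h(k)] == k

    return contains
```

A lookup evaluates exactly two hash functions and inspects one slot, independent of how unlucky the build phase was.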