Results 1 - 10 of 759
Models and issues in data stream systems
- In PODS, 2002
"... In this overview paper we motivate the need for and research issues arising from a new model of data processing. In this model, data does not take the form of persistent relations, but rather arrives in multiple, continuous, rapid, time-varying data streams. In addition to reviewing past work releva ..."
Abstract
-
Cited by 786 (19 self)
In this overview paper we motivate the need for and research issues arising from a new model of data processing. In this model, data does not take the form of persistent relations, but rather arrives in multiple, continuous, rapid, time-varying data streams. In addition to reviewing past work relevant to data stream systems and current projects in the area, the paper explores topics in stream query languages, new requirements and challenges in query processing, and algorithmic issues.
Security and Composition of Multi-party Cryptographic Protocols
- Journal of Cryptology, 1998
"... We present general definitions of security for multi-party cryptographic protocols, with focus on the task of evaluating a probabilistic function of the parties' inputs. We show that, with respect to these definitions, security is preserved under a natural composition operation. The definiti ..."
Abstract
-
Cited by 463 (19 self)
We present general definitions of security for multi-party cryptographic protocols, with a focus on the task of evaluating a probabilistic function of the parties' inputs. We show that, with respect to these definitions, security is preserved under a natural composition operation. The definitions follow the general paradigm of known definitions; yet some substantial modifications and simplifications are introduced. The composition operation is the natural "subroutine substitution" operation, formalized by Micali and Rogaway. We consider several standard settings for multi-party protocols, including the cases of eavesdropping, Byzantine, non-adaptive and adaptive adversaries, as well as the information-theoretic and the computational models. In particular, in the computational model we provide the first definition of security of protocols that is shown to be preserved under composition.
A Parallel Repetition Theorem
- SIAM Journal on Computing, 1998
"... We show that a parallel repetition of any two-prover one-round proof system (MIP(2, 1)) decreases the probability of error at an exponential rate. No constructive bound was previously known. The constant in the exponent (in our analysis) depends only on the original probability of error and on the t ..."
Abstract
-
Cited by 362 (9 self)
We show that a parallel repetition of any two-prover one-round proof system (MIP(2, 1)) decreases the probability of error at an exponential rate. No constructive bound was previously known. The constant in the exponent (in our analysis) depends only on the original probability of error and on the total number of possible answers of the two provers. The dependency on the total number of possible answers is logarithmic, which was recently proved to be almost the best possible [U. Feige and O. Verbitsky, Proc. 11th Annual IEEE Conference on Computational Complexity, IEEE Computer Society Press, Los Alamitos, CA, 1996, pp. 70–76].
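Schematically, the bound described in this abstract has the following shape, where ε is the error probability of a single round and s is the total number of possible answers of the two provers (the exact constant is the paper's; this only records the form of the dependence stated above):

\[ \Pr[\text{error after } n \text{ parallel repetitions}] \;\le\; c(\varepsilon)^{\,n/\log s}, \qquad c(\varepsilon) < 1 . \]

The n/log s exponent is the logarithmic dependence on the answer-set size that the cited Feige–Verbitsky result shows to be nearly unavoidable.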
Sharing the Cost of Multicast Transmissions
- 2001
"... We investigate cost-sharing algorithms for multicast transmission. Economic considerations point to two distinct mechanisms, marginal cost and Shapley value, as the two solutions most appropriate in this context. We prove that the former has a natural algorithm that uses only two messages per link o ..."
Abstract
-
Cited by 284 (16 self)
We investigate cost-sharing algorithms for multicast transmission. Economic considerations point to two distinct mechanisms, marginal cost and Shapley value, as the two solutions most appropriate in this context. We prove that the former has a natural algorithm that uses only two messages per link of the multicast tree, while we give evidence that the latter requires a quadratic total number of messages. We also show that the welfare value achieved by an optimal multicast tree is NP-hard to approximate within any constant factor, even for bounded-degree networks. The lower-bound proof for the Shapley value uses a novel algebraic technique for bounding from below the number of messages exchanged in a distributed computation; this technique may prove useful in other contexts as well.
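To give a concrete flavor of the two-messages-per-link result for the marginal-cost mechanism, here is a minimal sketch of a bottom-up/top-down pass over a multicast tree. It is a plausible reading of that scheme, not necessarily the paper's exact mechanism: the class and field names are hypothetical, and the payment computation carried by the second message is omitted; only the welfare aggregation and the receiver set are shown.

```python
# Sketch: bottom-up welfare aggregation and top-down receiver selection
# on a multicast tree. Payments are deliberately not reproduced here.

class Node:
    def __init__(self, link_cost, utilities=(), children=()):
        self.link_cost = link_cost        # cost of the link from the parent to this node
        self.utilities = list(utilities)  # utilities of users attached to this node
        self.children = list(children)
        self.welfare = 0.0
        self.receives = False

def bottom_up(node):
    """First pass: welfare of the subtree rooted at `node`, counting only
    child subtrees that are worth keeping (welfare >= 0)."""
    w = sum(node.utilities) - node.link_cost
    for child in node.children:
        w += max(bottom_up(child), 0.0)
    node.welfare = w
    return w

def top_down(node, parent_receives=True):
    """Second pass: a node receives iff its parent receives and its own
    subtree welfare is non-negative."""
    node.receives = parent_receives and node.welfare >= 0.0
    for child in node.children:
        top_down(child, node.receives)

# Example: a root (zero-cost link) with two subtrees.
leaf = Node(link_cost=5, utilities=[7])
mid = Node(link_cost=4, utilities=[1], children=[leaf])
lone = Node(link_cost=10, utilities=[2])
root = Node(link_cost=0, children=[mid, lone])

bottom_up(root)
top_down(root)
print([(n.welfare, n.receives) for n in (root, mid, leaf, lone)])
```

Each link carries exactly one value upward (the child's subtree welfare) and one decision downward, which is the message pattern the abstract refers to.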
Small-Bias Probability Spaces: Efficient Constructions and Applications
- SIAM J. Comput., 1993
"... We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. They are called ffl-biased random variables. The number of random bits needed to generate the random var ..."
Abstract
-
Cited by 276 (13 self)
We show how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with "almost" equal probability. They are called ε-biased random variables. The number of random bits needed to generate the random variables is O(log n + log(1/ε)). Thus, if ε is polynomially small, then the size of the sample space is also polynomial. Random variables that are ε-biased can be used to construct "almost" k-wise independent random variables, where ε is a function of k. These probability spaces have various applications: 1. Derandomization of algorithms: many randomized algorithms that require only k-wise independence of their random bits (where k is bounded by O(log n)) can be derandomized by using ε-biased random variables. 2. Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication. 3. Exhaustive testing of combinatorial circuits ...
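The matrix-multiplication application mentioned in item 2 is Freivalds-style verification: the random test vector needs far less independence than fully random bits provide, so ε-biased bits can shrink the seed. Below is a minimal sketch with fully random bits (NumPy; the function name is hypothetical, and the ε-biased generator itself is not reproduced).

```python
import numpy as np

def verify_product(A, B, C, trials=20, rng=None):
    """Freivalds-style check that A @ B == C.
    Each trial draws a random 0/1 vector r and compares A @ (B @ r) with C @ r,
    costing O(n^2) per trial instead of O(n^3) for a full recomputation.
    A wrong C is caught in each trial with probability >= 1/2, so the error
    probability after `trials` independent trials is at most 2**(-trials).
    The paper's point: the bits of r need not be fully independent; epsilon-biased
    bits suffice, reducing the seed length to O(log n + log(1/eps))."""
    rng = rng or np.random.default_rng()
    n = C.shape[1]
    for _ in range(trials):
        r = rng.integers(0, 2, size=n)
        if not np.array_equal(A @ (B @ r), C @ r):
            return False   # definitely not the product
    return True            # probably the product

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(verify_product(A, B, A @ B))        # True
print(verify_product(A, B, A @ B + 1))    # False with overwhelming probability
```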
An Information Statistics Approach to Data Stream and Communication Complexity
- 2003
"... We present a new method for proving strong lower bounds in communication complexity. ..."
Abstract
-
Cited by 228 (8 self)
We present a new method for proving strong lower bounds in communication complexity.
Efficient Search for Approximate Nearest Neighbor in High Dimensional Spaces
- 1998
"... We address the problem of designing data structures that allow efficient search for approximate nearest neighbors. More specifically, given a database consisting of a set of vectors in some high dimensional Euclidean space, we want to construct a space-efficient data structure that would allow us to ..."
Abstract
-
Cited by 215 (9 self)
We address the problem of designing data structures that allow efficient search for approximate nearest neighbors. More specifically, given a database consisting of a set of vectors in some high dimensional Euclidean space, we want to construct a space-efficient data structure that would allow us to search, given a query vector, for the closest or nearly closest vector in the database. We also address this problem when distances are measured by the L1 norm, and in the Hamming cube. Significantly improving and extending recent results of Kleinberg, we construct data structures whose size is polynomial in the size of the database, and search algorithms that run in time nearly linear or nearly quadratic in the dimension (depending on the case; the extra factors are polylogarithmic in the size of the database).
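To make the approximation trade-off concrete, here is a tiny sketch of a randomized hashing approach to approximate nearest neighbor using random-hyperplane (sign) hashes. This is a generic, later LSH-style illustration with hypothetical names, not the data structure constructed in this paper.

```python
import numpy as np

class SignHashIndex:
    """Toy approximate-NN index: hash each vector by the sign pattern of a few
    random projections, then search only the query's bucket. Vectors close in
    angle tend to share sign patterns, so the bucket is a cheap candidate set;
    exactness is traded for query time and space."""

    def __init__(self, dim, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}
        self.points = []

    def _key(self, v):
        return tuple((self.planes @ v > 0).astype(int))

    def add(self, v):
        self.points.append(v)
        self.buckets.setdefault(self._key(v), []).append(len(self.points) - 1)

    def query(self, q):
        # Fall back to a full scan if the query's bucket happens to be empty.
        candidates = self.buckets.get(self._key(q), range(len(self.points)))
        return min(candidates, key=lambda i: np.linalg.norm(self.points[i] - q))

# Usage: index random points, then query a slightly perturbed copy of one of them.
rng = np.random.default_rng(1)
index = SignHashIndex(dim=64)
data = rng.standard_normal((1000, 64))
for v in data:
    index.add(v)
print(index.query(data[42] + 0.01 * rng.standard_normal(64)))  # likely 42
```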
Quantum vs. classical communication and computation
- Proc. 30th Ann. ACM Symp. on Theory of Computing (STOC ’98), 1998
"... We present a simple and general simulation technique that transforms any black-box quantum algorithm (à la Grover’s database search algorithm) to a quantum communication protocol for a related problem, in a way that fully exploits the quantum parallelism. This allows us to obtain new positive and ne ..."
Abstract
-
Cited by 158 (14 self)
We present a simple and general simulation technique that transforms any black-box quantum algorithm (à la Grover’s database search algorithm) to a quantum communication protocol for a related problem, in a way that fully exploits the quantum parallelism. This allows us to obtain new positive and negative results. The positive results are novel quantum communication protocols that are built from nontrivial quantum algorithms via this simulation. These protocols, combined with (old and new) classical lower bounds, are shown to provide the first asymptotic separation results between the quantum and classical (probabilistic) two-party communication complexity models. In particular, we obtain a quadratic separation for the bounded-error model, and an exponential separation for the zero-error model. The negative results transform known quantum communication lower bounds to computational lower bounds in the black-box model. In particular, we show that the quadratic speed-up achieved by Grover for the OR function is impossible for the PARITY function or the MAJORITY function in the bounded-error model, and is likewise impossible for the OR function itself in the exact case. This dichotomy naturally suggests a study of bounded-depth predicates (i.e., those in the polynomial hierarchy) between OR and MAJORITY. We present black-box algorithms that achieve near-quadratic speed-up for all such predicates.
Computing and Communicating Functions over Sensor Networks
- 2004
"... In wireless sensor networks, one is not interested in downloading all the data from all the sensors. Rather, one is only interested in collecting from a sink node a relevant function of the sensor measurements. This paper studies the maximum rate at which functions of sensor measurements can be comp ..."
Abstract
-
Cited by 156 (13 self)
In wireless sensor networks, one is not interested in downloading all the data from all the sensors. Rather, one is only interested in collecting, at a sink node, a relevant function of the sensor measurements. This paper studies the maximum rate at which functions of sensor measurements can be computed and communicated to the sink node. It focuses ...
Reductions in Streaming Algorithms, with an Application to Counting Triangles in Graphs
"... We introduce reductions in the streaming model as a tool in the design of streaming algorithms. We develop ..."
Abstract
-
Cited by 149 (5 self)
We introduce reductions in the streaming model as a tool in the design of streaming algorithms. We develop ...
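One commonly cited reduction of this flavor (a hedged sketch; it may not match the paper's exact construction) turns each edge of a graph on vertex set V into the vertex triples containing it. A triple then appears once, twice, or three times according to how many of its edges are present, so the triangle count can be read off from the frequency moments F0, F1, F2 of the derived stream, which streaming algorithms can approximate in small space. The sketch below computes the moments exactly to exhibit the identity.

```python
from itertools import combinations
from collections import Counter

def triangles_via_moments(n, edges):
    """Derived stream: for every edge {u, v}, emit every triple {u, v, w}.
    A triple occurs k times iff exactly k of its three edges are present, so with
    a_k = #{triples occurring k times},
        F0 = a1 + a2 + a3,  F1 = a1 + 2*a2 + 3*a3,  F2 = a1 + 4*a2 + 9*a3,
    the number of triangles is a3 = (F2 - 3*F1 + 2*F0) / 2.
    (A genuine streaming algorithm would estimate F0 and F2 in small space.)"""
    counts = Counter()
    for u, v in edges:
        for w in range(n):
            if w not in (u, v):
                counts[frozenset((u, v, w))] += 1
    f0 = len(counts)
    f1 = sum(counts.values())
    f2 = sum(c * c for c in counts.values())
    return (f2 - 3 * f1 + 2 * f0) // 2

# The complete graph K4 has exactly 4 triangles.
print(triangles_via_moments(4, list(combinations(range(4), 2))))  # 4
```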