An improved LPN algorithm
 In Roberto De Prisco and Moti Yung, editors, SCN, volume 4116 of Lecture Notes in Computer Science
"... Abstract. HB + is a sharedkey authentication protocol, proposed by Juels and Weis at Crypto 2005, using prior work of Hopper and Blum. Its very low computational cost makes it attractive for lowcost devices such as radiofrequency identification(RFID) tags. Juels and Weis gave a security proof, re ..."
Abstract

Cited by 11 (0 self)
 Add to MetaCart
HB+ is a shared-key authentication protocol, proposed by Juels and Weis at Crypto 2005, building on prior work of Hopper and Blum. Its very low computational cost makes it attractive for low-cost devices such as radio-frequency identification (RFID) tags. Juels and Weis gave a security proof relying on the hardness of the “learning parity with noise” (LPN) problem. Here, we improve the previous best known algorithm, due to Blum, Kalai, and Wasserman, for solving the LPN problem. This new algorithm yields an attack on HB+ in the detection-based model with work factor 2^52.
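The LPN problem underlying HB+'s security can be stated concretely: given samples (a, ⟨a, s⟩ ⊕ e) for a secret bit-vector s and Bernoulli noise e, recover s. The following is a minimal sketch of such a sample oracle, illustrating the problem statement only, not the improved BKW-style algorithm the paper develops; the dimension and noise rate are arbitrary illustrative choices.

```python
import random

def lpn_oracle(secret, tau, rng):
    """Return one LPN sample: a random vector a and the noisy inner
    product <a, s> + e (mod 2), where e is 1 with probability tau."""
    n = len(secret)
    a = [rng.randrange(2) for _ in range(n)]
    noiseless = sum(ai & si for ai, si in zip(a, secret)) % 2
    e = 1 if rng.random() < tau else 0   # Bernoulli(tau) noise bit
    return a, noiseless ^ e

rng = random.Random(0)
secret = [rng.randrange(2) for _ in range(16)]
samples = [lpn_oracle(secret, tau=0.125, rng=rng) for _ in range(1000)]

# With tau = 0.125, roughly one sample in eight carries a flipped label.
errors = sum((sum(ai & si for ai, si in zip(a, secret)) % 2) != b
             for a, b in samples)
```

Without the noise bit, Gaussian elimination on n samples would recover s immediately; the noise is what makes the problem (conjecturally) hard.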
Robust gossiping with an application to consensus
 Journal of Computer and System Sciences
"... We study deterministic gossiping in synchronous systems with dynamic crash failures. Each processor is initialized with an input value called rumor. In the standard gossip problem, the goal of every processor is to learn all the rumors. When processors may crash, then this goal needs to be revised, ..."
Abstract

Cited by 9 (5 self)
 Add to MetaCart
We study deterministic gossiping in synchronous systems with dynamic crash failures. Each processor is initialized with an input value called a rumor. In the standard gossip problem, the goal of every processor is to learn all the rumors. When processors may crash, this goal needs to be revised, since it is possible, at some point in an execution, that certain rumors are known only to processors that have already crashed. We define gossiping to be completed, for a system with crashes, when, for any processor v, every processor knows either the rumor of v or that v has already crashed. We design gossiping algorithms that are efficient with respect to both time and communication. Let t < n be the number of failures, where n is the number of processors. If n − t = Ω(n/polylog n), then one of our algorithms completes gossiping in O(log^2 t) time and with O(n polylog n) messages. We develop an algorithm that performs gossiping with O(n^1.77) messages and in O(log^2 n) time, in any execution in which at least one processor remains non-faulty. We show a trade-off between time and communication in gossiping algorithms: if the number of messages is at most O(n polylog n), then the time has to be at least Ω(log n / (log(n log n) − log t)). By way of application, we show that if n − t = Ω(n), then consensus can be solved in O(t) time and with O(n log^2 t) messages.
Efficient Fully-Simulatable Oblivious Transfer
 the Journal of Cryptology
, 2007
"... Oblivious transfer, first introduced by Rabin, is one of the basic building blocks of cryptographic protocols. In an oblivious transfer (or more exactly, in its 1outof2 variant), one party known as the sender has a pair of messages and the other party known as the receiver obtains one of them. So ..."
Abstract

Cited by 9 (3 self)
 Add to MetaCart
Oblivious transfer, first introduced by Rabin, is one of the basic building blocks of cryptographic protocols. In an oblivious transfer (or, more exactly, in its 1-out-of-2 variant), one party, known as the sender, has a pair of messages, and the other party, known as the receiver, obtains one of them. Somewhat paradoxically, the receiver obtains exactly one of the messages (and learns nothing of the other), and the sender does not know which of the messages the receiver obtained. Due to its importance as a building block for secure protocols, the efficiency of oblivious transfer protocols has been extensively studied. However, to date, there are almost no known oblivious transfer protocols that are secure in the presence of malicious adversaries under the real/ideal model simulation paradigm (without using general zero-knowledge proofs). Thus, efficient protocols that reach this level of security are of great interest. In this paper we present efficient oblivious transfer protocols that are secure according to the ideal/real model simulation paradigm. We achieve constructions under the DDH, Nth residuosity and quadratic residuosity assumptions, as well as under the assumption that homomorphic encryption exists.
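To make the 1-out-of-2 functionality concrete, here is a toy Diffie–Hellman-style OT sketch in the spirit of later "simplest OT" constructions. This is not one of the paper's protocols, and the group, parameters, and hash usage below are deliberately minimal illustrative choices that would be far too weak for real use.

```python
import hashlib
import random

# ILLUSTRATION ONLY: toy modulus and generator, not secure parameters.
p = (1 << 127) - 1   # a Mersenne prime
g = 3

def H(x: int) -> bytes:
    return hashlib.sha256(str(x).encode()).digest()

def xor(b1: bytes, b2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(b1, b2))

rng = random.Random(0)
m0 = b"message zero".ljust(32, b".")
m1 = b"message one".ljust(32, b".")

for c in (0, 1):                       # receiver's choice bit
    # Sender commits to a random exponent.
    a = rng.randrange(2, p - 1)
    A = pow(g, a, p)
    # Receiver embeds its choice: B = g^b for c=0, B = A*g^b for c=1.
    b = rng.randrange(2, p - 1)
    B = pow(g, b, p) if c == 0 else (A * pow(g, b, p)) % p
    # Sender derives one key per message and encrypts both.
    k0 = H(pow(B, a, p))
    k1 = H(pow(B * pow(A, -1, p) % p, a, p))
    c0, c1 = xor(m0, k0), xor(m1, k1)
    # Receiver can compute only the key for its chosen message.
    k = H(pow(A, b, p))
    recovered = xor(c0 if c == 0 else c1, k)
    assert recovered == (m0 if c == 0 else m1)
```

The point of the sketch is the asymmetry: the receiver's B reveals nothing about c to the sender, while the receiver can derive exactly one of the two keys g^{ab}.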
Group Testing with Probabilistic Tests: Theory, Design and Application
"... Identification of defective members of large populations has been widely studied in the statistics community under the name of group testing. It involves grouping subsets of items into different pools and detecting defective members based on the set of test results obtained for each pool. In a class ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
Identification of defective members of large populations has been widely studied in the statistics community under the name of group testing. It involves grouping subsets of items into different pools and detecting defective members based on the set of test results obtained for each pool. In a classical noiseless group-testing setup, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in the sense that the existence of a defective member in a pool causes the test outcome of that pool to be positive. However, this may not always be a valid assumption in some cases of interest. In particular, we consider the case where the defective items in a pool can become independently inactive with a certain probability. Hence, one may obtain a negative test result in a pool despite it containing some defective items. As a result, any sampling and reconstruction method should be able to cope with two different types of uncertainty, i.e., the unknown set of defective items and the partially unknown, probabilistic testing procedure. In this work, motivated by the application of detecting infected people in viral epidemics, we design non-adaptive sampling procedures that allow successful identification of the defective items through a set of probabilistic tests. Our design requires only a small number of tests to single out the defective items.
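A quick simulation makes the two sources of uncertainty concrete; the item counts, pool sizes, and probabilities below are arbitrary illustrative choices, not the paper's design.

```python
import random

def run_tests(defectives, pools, p_inactive, rng):
    """Test each pool. A defective item contributes to a positive outcome
    only if it is 'active'; each defective independently goes inactive
    with probability p_inactive per test, so a pool containing defectives
    can still test negative."""
    results = []
    for pool in pools:
        positive = any(item in defectives and rng.random() >= p_inactive
                       for item in pool)
        results.append(positive)
    return results

rng = random.Random(1)
n = 20
defectives = {3, 11}
pools = [set(rng.sample(range(n), 5)) for _ in range(30)]

noiseless = run_tests(defectives, pools, p_inactive=0.0, rng=rng)
noisy     = run_tests(defectives, pools, p_inactive=0.3, rng=rng)

# With p_inactive = 0, a pool is positive iff it contains a defective item.
assert all(res == bool(pool & defectives) for pool, res in zip(pools, noiseless))
```

Note the one-sided structure: a noisy positive still certifies a defective in the pool, but a noisy negative no longer rules one out, which is what a reconstruction algorithm must cope with.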
Exploring the design space of social network-based Sybil defense
 In Proceedings of the 4th International Conference on Communication Systems and Networks (COMSNETS’12)
, 2012
"... Abstract—Recently, there has been significant research interest in leveraging social networks to defend against Sybil attacks. While much of this work may appear similar at first glance, existing social networkbased Sybil defense schemes can be divided into two categories: Sybil detection and Sybil ..."
Abstract

Cited by 7 (5 self)
 Add to MetaCart
Recently, there has been significant research interest in leveraging social networks to defend against Sybil attacks. While much of this work may appear similar at first glance, existing social network-based Sybil defense schemes can be divided into two categories: Sybil detection and Sybil tolerance. Both categories of systems leverage global properties of the underlying social graph, but they rely on different assumptions and provide different guarantees: Sybil detection schemes are application-independent and rely only on the graph structure to identify Sybil identities, while Sybil tolerance schemes rely on application-specific information and leverage the graph structure and transaction history to bound the leverage an attacker can gain from using multiple identities. In this paper, we take a closer look at the design goals, models, assumptions, guarantees, and limitations of both categories of social network-based Sybil defense systems.
SybilDefender: Defend against sybil attacks in large social networks
 In IEEE INFOCOM
, 2012
"... Abstract—Distributed systems without trusted identities are particularly vulnerable to sybil attacks, where an adversary creates multiple bogus identities to compromise the running of the system. This paper presents SybilDefender, a sybil defense mechanism that leverages the network topologies to de ..."
Abstract

Cited by 7 (5 self)
 Add to MetaCart
Distributed systems without trusted identities are particularly vulnerable to Sybil attacks, where an adversary creates multiple bogus identities to compromise the running of the system. This paper presents SybilDefender, a Sybil defense mechanism that leverages the network topology to defend against Sybil attacks in social networks. Based on performing a limited number of random walks within the social graphs, SybilDefender is efficient and scalable to large social networks. Our experiments on two 3,000,000-node real-world social topologies show that SybilDefender outperforms the state of the art by one to two orders of magnitude in both accuracy and running time. SybilDefender can effectively identify the Sybil nodes and detect the Sybil community around a Sybil node, even when the number of Sybil nodes introduced by each attack edge is close to the theoretically detectable lower bound. In addition, we propose two approaches to limiting the number of attack edges in online social networks. The survey results of our Facebook application show that the assumption made by previous work that all the relationships in social networks are trusted does not hold in online social networks, and that it is feasible to limit the number of attack edges in online social networks by relationship rating.
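The random-walk primitive behind such schemes can be sketched on a toy topology: walks started at a known honest node rarely cross the few attack edges into the Sybil region. The graph, walk length, and walk count below are hypothetical choices for illustration, not SybilDefender's actual parameters or decision rule.

```python
import random
from collections import Counter

def random_walks(adj, start, walk_len, n_walks, rng):
    """Run n_walks random walks of length walk_len from start,
    counting how often each node is visited."""
    visits = Counter()
    for _ in range(n_walks):
        node = start
        for _ in range(walk_len):
            node = rng.choice(adj[node])
            visits[node] += 1
    return visits

# Tiny synthetic topology: honest clique 0-4, Sybil clique 5-9,
# joined by a single attack edge (4, 5).
adj = {i: [j for j in range(5) if j != i] for i in range(5)}
adj.update({i: [j for j in range(5, 10) if j != i] for i in range(5, 10)})
adj[4].append(5)
adj[5].append(4)

rng = random.Random(2)
visits = random_walks(adj, start=0, walk_len=4, n_walks=500, rng=rng)
honest_visits = sum(visits[v] for v in range(5))
sybil_visits = sum(visits[v] for v in range(5, 10))
# Short walks from an honest node land overwhelmingly in the honest region.
assert honest_visits > sybil_visits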
Towards Reliable Broadcasting using ACKs
"... Abstract — We propose a mechanism for reliable broadcasting in wireless networks, that consists of two components: a method for bandwidth efficient acknowledgment collection, and a coding scheme that uses acknowledgments. Our approach combines ideas from network coding and distributed space time cod ..."
Abstract

Cited by 6 (0 self)
 Add to MetaCart
We propose a mechanism for reliable broadcasting in wireless networks that consists of two components: a method for bandwidth-efficient acknowledgment collection, and a coding scheme that uses acknowledgments. Our approach combines ideas from network coding and distributed space-time coding.
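The network-coding side of such a scheme often rests on the classic XOR retransmission trick: when two receivers each missed a different packet, one coded retransmission serves both. This is a minimal sketch of that general idea, not the paper's specific coding scheme.

```python
def xor_packets(p1: bytes, p2: bytes) -> bytes:
    """XOR two equal-length packets byte by byte."""
    return bytes(a ^ b for a, b in zip(p1, p2))

p1 = b"packet-one pad00"
p2 = b"packet-two pad00"
coded = xor_packets(p1, p2)   # single coded retransmission

# Receiver A got p1 but ACKed only p1 (missing p2);
# receiver B got p2 but ACKed only p2 (missing p1).
recovered_by_A = xor_packets(coded, p1)
recovered_by_B = xor_packets(coded, p2)
assert recovered_by_A == p2
assert recovered_by_B == p1
```

The acknowledgments are what make this possible: the sender needs to know which receivers hold which packets before it can pick a coded combination that is decodable by all of them.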
Canal: Scaling Social Network-Based Sybil Tolerance Schemes
"... There has been a flurry of research on leveraging social networks to defend against multiple identity, or Sybil, attacks. A series of recent works does not try to explicitly identify Sybil identities and, instead, bounds the impact that Sybil identities can have. We call these approaches Sybil toler ..."
Abstract

Cited by 6 (3 self)
 Add to MetaCart
There has been a flurry of research on leveraging social networks to defend against multiple-identity, or Sybil, attacks. A series of recent works does not try to explicitly identify Sybil identities and, instead, bounds the impact that Sybil identities can have. We call these approaches Sybil tolerance; they have been shown to be effective in applications including reputation systems, spam protection, online auctions, and content rating systems. All of these approaches use a social network as a credit network, rendering multiple identities ineffective to an attacker without a commensurate increase in social links to honest users (which are assumed to be hard to obtain). Unfortunately, a hurdle to practical adoption is that Sybil tolerance relies on computationally expensive network analysis, thereby limiting widespread deployment.
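The credit-network idea can be sketched as follows: each social edge carries a credit balance, and every action must pay credit along a path, so extra Sybil identities buy nothing once the attack edges' credit is spent. The edge credits, refund rule, and BFS path selection here are simplified illustrative choices, not Canal's actual mechanism.

```python
from collections import deque

def find_credit_path(credit, src, dst):
    """BFS for a path from src to dst using only edges with positive credit."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v, c in credit.get(u, {}).items():
            if c > 0 and v not in prev:
                prev[v] = u
                q.append(v)
    return None

def charge(credit, src, dst, amount=1):
    """Charge `amount` credit units along paths from src to dst, if possible."""
    for _ in range(amount):
        path = find_credit_path(credit, src, dst)
        if path is None:
            return False
        for u, v in zip(path, path[1:]):
            credit[u][v] -= 1                       # pay one unit forward
            credit.setdefault(v, {}).setdefault(u, 0)
            credit[v][u] += 1                       # credit the reverse direction
    return True

# User 'a' reaches target 't' only through 'b'; 2 credits per edge direction.
credit = {'a': {'b': 2}, 'b': {'a': 2, 't': 2}, 't': {'b': 2}}
assert charge(credit, 'a', 't')       # first action succeeds
assert charge(credit, 'a', 't')       # second action succeeds
assert not charge(credit, 'a', 't')   # a->b credit exhausted; further actions blocked
```

However many identities an attacker mints behind 'a', all of its actions still drain the same a-to-b credit, which is the Sybil-tolerance bound in miniature.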
Graph-Constrained Group Testing
, 2010
"... Nonadaptive group testing involves grouping arbitrary subsets of n items into different pools. Each pool is then tested and defective items are identified. A fundamental question involves minimizing the number of pools required to identify at most d defective items. Motivated by applications in net ..."
Abstract

Cited by 6 (1 self)
 Add to MetaCart
Non-adaptive group testing involves grouping arbitrary subsets of n items into different pools. Each pool is then tested and defective items are identified. A fundamental question involves minimizing the number of pools required to identify at most d defective items. Motivated by applications in network tomography, sensor networks and infection propagation, we formulate group testing problems on graphs. Unlike conventional group testing problems, each group here must conform to the constraints imposed by a graph. For instance, items can be associated with vertices and each pool is any set of nodes that must be path-connected. In this paper we associate a test with a random walk. In this context, conventional group testing corresponds to the special case of a complete graph on n vertices. For interesting classes of graphs we arrive at a rather surprising result, namely, that the number of tests required to identify d defective items is substantially similar to that required in conventional group testing problems, where no such constraints on pooling are imposed. Specifically, if T(n) denotes the mixing time of the graph G, we show that with m = O(d^2 T^2(n) log(n/d)) non-adaptive tests, one can identify the defective items. Consequently, for the Erdős–Rényi random graph G(n, p), as well as for expander graphs with constant spectral gap, it follows that m = O(d^2 log^3 n) non-adaptive tests suffice.
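Generating graph-constrained pools via random walks can be sketched directly: each test pool is the set of vertices visited by one walk, so every pool automatically respects graph connectivity. The cycle graph, walk length, and pool count below are arbitrary illustrative choices.

```python
import random

def walk_pool(adj, walk_len, rng):
    """One test pool: the set of vertices visited by a random walk."""
    node = rng.choice(list(adj))
    pool = {node}
    for _ in range(walk_len):
        node = rng.choice(adj[node])
        pool.add(node)
    return pool

# Cycle graph on 12 vertices: every pool is a connected set of nodes,
# never an arbitrary subset as in conventional group testing.
n = 12
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rng = random.Random(3)
pools = [walk_pool(adj, walk_len=4, rng=rng) for _ in range(20)]

defectives = {7}
results = [bool(pool & defectives) for pool in pools]
# A walk of length 4 visits at most 5 distinct vertices.
assert all(len(pool) <= 5 for pool in pools)
```

On the complete graph a walk's pool is essentially an arbitrary subset, which is why conventional group testing falls out as the special case the abstract mentions.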
Data structures for range-aggregate extent queries
 In Proc. 20th CCCG
, 2008
"... A fundamental and wellstudied problem in computational geometry is range searching, where the goal is to preprocess a set, S, of geometric objects (e.g., points in the plane) so that the subset S ′ ⊆ S that is contained in a query range (e.g., an axesparallel rectangle) can be reported efficientl ..."
Abstract

Cited by 5 (2 self)
 Add to MetaCart
A fundamental and well-studied problem in computational geometry is range searching, where the goal is to preprocess a set, S, of geometric objects (e.g., points in the plane) so that the subset S′ ⊆ S that is contained in a query range (e.g., an axis-parallel rectangle) can be reported efficiently. However, in many situations, what is of interest is to generate a more informative “summary” of the output, obtained by applying a suitable aggregation function on S′. Examples of such aggregation functions include count, sum, min, max, mean, median, mode, and top-k, usually computed on a set of weights defined suitably on the objects. Such range-aggregate query problems have been the subject of much recent research in both the database and the computational geometry communities. In this paper, we further generalize this line of work by considering aggregation functions on point sets that measure the extent or “spread” of the objects in the retrieved set S′. The functions considered here include closest pair, diameter, and width. The challenge here is that these aggregation functions (unlike, say, count) are not efficiently decomposable, in the sense that the answer for S′ cannot be inferred easily from answers for subsets that induce a partition of S′.
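The naive baseline that such data structures aim to beat can be stated in a few lines: retrieve S′ and compute the aggregate from scratch. A brute-force sketch for the closest-pair aggregate (illustrative only; the point set and query rectangle are made up):

```python
from itertools import combinations
from math import dist, inf

def range_closest_pair(points, rect):
    """Brute-force range-aggregate query: the closest pair among the points
    inside an axis-parallel rectangle rect = (x1, y1, x2, y2).
    Costs O(k^2) in the number k of retrieved points -- the cost a
    range-aggregate data structure is designed to avoid."""
    x1, y1, x2, y2 = rect
    inside = [p for p in points if x1 <= p[0] <= x2 and y1 <= p[1] <= y2]
    best, pair = inf, None
    for p, q in combinations(inside, 2):
        d = dist(p, q)
        if d < best:
            best, pair = d, (p, q)
    return best, pair

points = [(0, 0), (1, 0), (5, 5), (5.5, 5), (9, 9)]
best, pair = range_closest_pair(points, (4, 4, 10, 10))
assert pair == ((5, 5), (5.5, 5))
```

Note why decomposability fails here: the closest pair of S′ may straddle any split of S′ into subsets, so precomputed answers for the parts do not determine the answer for the whole.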