Results 1–10 of 21
A Provably Secure True Random Number Generator with Built-in Tolerance to Active Attacks
IEEE Transactions on Computers, 2007
Cited by 33 (3 self)
This paper is a contribution to the theory of true random number generators based on sampling phase jitter in oscillator rings. After discussing several misconceptions and apparently insurmountable obstacles, we propose a general model which, under mild assumptions, will generate provably random bits with some tolerance to adversarial manipulation and running in the megabit-per-second range. A key idea throughout the paper is the fill rate, which measures the fraction of the time domain in which the analog output signal is arguably random. Our study shows that an exponential increase in the number of oscillators is required to obtain a constant-factor improvement in the fill rate. Yet, we overcome this problem by introducing a post-processing step which consists of an application of an appropriate resilient function. This allows the designer to extract random samples from a signal with only a moderate fill rate, and therefore with many fewer oscillators than in other designs. Lastly, we develop fault-attack models, and we employ the properties of resilient functions to withstand such attacks. All of our analysis is based on rigorous methods, enabling us to develop a framework in which we accurately quantify the performance and the degree of resilience of the design. Key Words: True (and pseudo) random number generators, resilient functions, cryptography.
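The resilient-function post-processing step can be made concrete with a small sketch. A standard construction (an assumption of this sketch, not a design taken from the paper) builds a linear resilient function from the generator matrix of an [n, k, d] binary code, yielding a (d−1)-resilient map:

```python
# Illustrative sketch only: a linear resilient function built from a
# generator matrix of the [8, 4, 4] extended Hamming code. Every
# nonzero combination of the rows below has Hamming weight >= 4, so
# the map x -> G x (mod 2) is 3-resilient: even if an adversary fixes
# any 3 of the 8 raw input bits, the 4 output bits stay uniform.
# (The matrix and sizes are this sketch's assumptions, not parameters
# from the paper.)

G = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

def extract(raw_bits):
    """Compress 8 raw sampled bits into 4 post-processed bits, y = G x mod 2."""
    assert len(raw_bits) == len(G[0])
    return [sum(g * b for g, b in zip(row, raw_bits)) % 2 for row in G]

# Example: 8 jittery oscillator samples in, 4 hardened bits out.
hardened = extract([1, 0, 1, 1, 0, 0, 1, 0])
```

The 2:1 compression is the price of resilience; the paper's point is that such a function tolerates a moderate fill rate, so fewer oscillators are needed upstream.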
Hybrid Set Domains to Strengthen Constraint Propagation and Reduce Symmetries
In Proceedings of the 10th International Conference on Principles and Practice of Constraint Programming (CP), volume 3258 of LNCS, 2004
Cited by 19 (0 self)
In the CP literature, combinatorial design problems such as sports scheduling, Steiner systems, error-correcting codes and more are typically solved using Finite Domain (FD) models, despite often being more naturally expressed as Finite Set (FS) models. Existing FS solvers have difficulty with such problems as they do not make strong use of the ubiquitous set-cardinality information. We investigate a new approach to strengthening the propagation of FS constraints in a tractable way: extending the domain representation to more closely approximate the true domain of a set variable. We show how this approach allows us to reach a stronger level of consistency, compared to standard FS solvers, for arbitrary constraints, as well as providing a mechanism for implementing certain symmetry-breaking constraints. Through experiments on Steiner systems and error-correcting codes, we demonstrate that our approach is not only an improvement over standard FS solvers but also an improvement on recently published results using FD 0/1 matrix models.
Indexing Information for Data Forensics
2005
Cited by 15 (5 self)
We introduce novel techniques for organizing the indexing structures of stored data so that alterations from an original version can be detected and the changed values specifically identified. We give forensic constructions for several fundamental data structures, including arrays, linked lists, binary search trees, skip lists, and hash tables. Some of our constructions are based on a new reduced-randomness construction for non-adaptive combinatorial group testing.
Improved Combinatorial Group Testing Algorithms for Real-World Problem Sizes
2005
Cited by 10 (2 self)
We study practically efficient methods for performing combinatorial group testing. We present efficient non-adaptive and two-stage combinatorial group testing algorithms, which identify the at most d items out of a given set of n items that are defective, using fewer tests for all practical set sizes. For example, our two-stage algorithm matches the information-theoretic lower bound for the number of tests in a combinatorial group testing regimen.
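To make the non-adaptive setting concrete, here is a minimal sketch for the simplest case, d = 1: pool j contains every item whose j-th binary digit is 1, so ceil(log2 n) pre-planned tests suffice and the positive outcomes spell out the defective item's index. (Illustrative only; the paper's algorithms handle general d with more careful pooling designs.)

```python
# Non-adaptive group testing for a single defective among n items.
import math

def make_pools(n):
    # pool j holds the items whose j-th binary digit is 1
    t = max(1, math.ceil(math.log2(n)))
    return [[i for i in range(n) if (i >> j) & 1] for j in range(t)]

def run_tests(pools, defective):
    # a pool tests positive iff it contains the defective item
    return [defective in pool for pool in pools]

def decode(outcomes):
    # read the positive tests as the binary digits of the index
    return sum(1 << j for j, positive in enumerate(outcomes) if positive)

pools = make_pools(16)                     # 4 pools cover 16 items
assert decode(run_tests(pools, 11)) == 11  # item 0b1011 identified
```

Note that item 0 lies in no pool, so an all-negative outcome identifies it only under the promise that exactly one item is defective.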
Optimal Scheduling for Disconnected Cooperation
2001
Cited by 8 (3 self)
We consider a distributed environment consisting of n processors that need to perform t tasks. We assume that communication is initially unavailable and that processors begin work in isolation. At some unknown point in time an unknown collection of processors may establish communication. Before processors begin communication they execute tasks in the order given by their schedules. Our goal is to schedule the work of isolated processors so that when communication is established for the first time, the number of redundantly executed tasks is controlled. We quantify worst-case redundancy as a function of processor advancements through their schedules. In this work we refine and simplify an extant deterministic construction for schedules with n ≤ t, and we develop a new analysis of its waste. The new analysis shows that for any pair of schedules, the number of redundant tasks can be controlled for the entire range of t tasks. Our new result is asymptotically optimal: the tails of these schedules are within a 1 + O(n^{-1/4}) factor of the lower bound. We also present two new deterministic constructions, one for t ≥ n and the other for t ≥ n^{3/2}, which substantially improve pairwise waste for all prefixes of length t/√n, and offer near-optimal waste for the tails of the schedules. Finally, we present bounds for the waste of any collection of k ≥ 2 processors, for both deterministic and randomized constructions.
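The quantity being bounded can be sketched directly: two isolated processors follow their own schedules (permutations of the t tasks), and the pairwise waste is the overlap of the prefixes they have completed when they first connect. The schedules below are illustrative, not the paper's construction.

```python
# Pairwise waste of two schedules after a and b completed tasks.

def waste(sched_p, sched_q, a, b):
    """Tasks executed by both processors after a and b steps respectively."""
    return len(set(sched_p[:a]) & set(sched_q[:b]))

s1 = list(range(8))              # processor P's schedule
s2 = [4, 5, 6, 7, 0, 1, 2, 3]    # processor Q starts at the other end
assert waste(s1, s2, 4, 4) == 0  # disjoint prefixes: no redundant work
assert waste(s1, s2, 8, 8) == 8  # both finished all t = 8 tasks
```

The design problem is choosing the permutations so that this overlap stays small for every pair and every pair of prefix lengths, which is where the paper's constructions come in.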
Non-Adaptive Fault Diagnosis for All-Optical Networks via Combinatorial Group Testing on Graphs
Cited by 8 (0 self)
We consider the fault diagnosis problem in all-optical networks, focusing on probing schemes to detect faults. Our work concentrates on non-adaptive probing schemes, in order to meet the stringent time requirements for fault recovery. This fault diagnosis problem motivates a new technical framework that we introduce: group testing with graph-based constraints. Using this framework, we develop several new probing schemes to detect network faults. The efficiency of our schemes often depends on the network topology; in many cases we can show that our schemes are near-optimal by providing tight lower bounds.
Improved combinatorial group testing for real-world problem sizes
In Workshop on Algorithms and Data Structures (WADS), Lecture Notes Comput. Sci., 2005
Cited by 7 (3 self)
We study practically efficient methods for performing combinatorial group testing. We present efficient non-adaptive and two-stage combinatorial group testing algorithms, which identify the at most d items out of a given set of n items that are defective, using fewer tests for all practical set sizes. For example, our two-stage algorithm matches the information-theoretic lower bound for the number of tests in a combinatorial group testing regimen.
Straggler identification in round-trip data streams via Newton’s identities and invertible Bloom filters
IEEE Transactions on Knowledge and Data Engineering, 2011
Cited by 7 (1 self)
We study the straggler identification problem, in which an algorithm must determine the identities of the remaining members of a set after it has had a large number of insertion and deletion operations performed on it, and now has relatively few remaining members. The goal is to do this in o(n) space, where n is the total number of identities. Straggler identification has applications, for example, in determining the unacknowledged packets in a high-bandwidth multicast data stream. We provide a deterministic solution to the straggler identification problem that uses only O(d log n) bits, based on a novel application of Newton’s identities for symmetric polynomials. This solution can identify any subset of d stragglers from a set of n O(log n)-bit identifiers, assuming that there are no false deletions of identities not already in the set. Indeed, we give a lower-bound argument showing that no small-space deterministic solution to the straggler identification problem can be guaranteed to handle false deletions. Nevertheless, we provide a simple randomized solution using O(d log n log(1/ε)) bits that can maintain a multiset and solve the straggler identification problem, tolerating false deletions, where ε > 0 is a user-defined parameter bounding the probability of an incorrect response. This randomized solution is based on a new type of Bloom filter, which we call the invertible Bloom filter.
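A toy version of such a structure can be sketched as follows. Each cell keeps a count, an XOR of inserted identifiers, and an XOR of identifier checksums; deletions cancel insertions, and when few identifiers remain, "pure" cells (count 1 with a consistent checksum) can be peeled to list them. The cell count, the two toy hash functions, and the checksum are illustrative choices, not the paper's exact construction.

```python
# Toy invertible-Bloom-filter sketch for straggler identification.

M = 16  # number of cells (illustrative)

def cells_for(x):
    # three (possibly coinciding) cells per identifier; toy hashes
    return {x % M, (7 * x + 3) % M, (13 * x + 5) % M}

def checksum(x):
    return (2654435761 * x) % (1 << 32)

class IBF:
    def __init__(self):
        self.count = [0] * M
        self.id_sum = [0] * M
        self.chk_sum = [0] * M

    def _update(self, x, delta):
        for i in cells_for(x):
            self.count[i] += delta
            self.id_sum[i] ^= x
            self.chk_sum[i] ^= checksum(x)

    def insert(self, x):
        self._update(x, +1)

    def delete(self, x):
        self._update(x, -1)

    def list_remaining(self):
        # Peel pure cells (count 1, consistent checksum) until no
        # progress; succeeds with high probability when few ids remain.
        out, progress = [], True
        while progress:
            progress = False
            for i in range(M):
                if self.count[i] == 1 and self.chk_sum[i] == checksum(self.id_sum[i]):
                    x = self.id_sum[i]
                    out.append(x)
                    self.delete(x)
                    progress = True
        return out

f = IBF()
for x in range(1, 101):
    f.insert(x)                  # 100 packets sent
for x in range(1, 101):
    if x not in (7, 42):
        f.delete(x)              # acknowledgements for all but two
assert sorted(f.list_remaining()) == [7, 42]
```

The space used is proportional to the number of cells, not to the 100 identifiers that passed through, which is the point of the o(n)-space regime.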
Bound Consistency for Binary Length-Lex Set Constraints
In Proceedings of the National Conference on Artificial Intelligence (AAAI), 2008
Cited by 6 (5 self)
The length-lex representation has recently been proposed for representing sets in Constraint Satisfaction Problems. The length-lex representation directly captures cardinality information, provides a total ordering for sets, and allows bound consistency on unary constraints to be enforced in time Õ(c), where c is the cardinality of the set. However, no algorithms were given to enforce bound consistency on binary constraints. This paper addresses this open issue. It presents algorithms to enforce bound consistency on disjointness and cardinality constraints in time O(c³). Moreover, it presents a generic bound-consistency algorithm for any binary constraint S which requires Õ(c²) calls to a feasibility subroutine for S.
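The ordering itself is simple to state: sets are compared first by cardinality and then lexicographically on their sorted elements, which is what lets domain bounds carry cardinality information directly. A minimal illustrative sketch (the paper works with bounds in this order, not explicit set lists):

```python
# Length-lex total order on finite sets of integers.

def lenlex_key(s):
    # compare by cardinality first, then lexicographically
    return (len(s), tuple(sorted(s)))

subsets = [{1, 3}, {2}, {1, 2, 3}, {1, 4}]
ordered = sorted(subsets, key=lenlex_key)
assert ordered == [{2}, {1, 3}, {1, 4}, {1, 2, 3}]
```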
Wavelength Add-Drop Multiplexing for Minimizing SONET ADMs
1999
Cited by 4 (1 self)
In a SONET ring, the assignment of source-to-destination circuits to wavelengths must respect traffic requirements and minimize both the number of wavelengths and the amount of terminal conversion equipment. When traffic requirements are approximately equal on all source-destination circuits, the assignment can be modeled as a graph decomposition problem. In this setting, techniques from combinatorial design theory can be applied. These techniques are introduced in a simpler form when every source-destination circuit requires one quarter of a wavelength. More sophisticated design-theoretic methods are then developed to produce the required decompositions for all sufficiently large ring sizes, when each source-destination circuit requires one eighth of a wavelength.