Results 1–10 of 12
Analysis and Improvement of a Pseudorandom Number Generator for EPC Gen2 Tags. Financial Cryptography and Data Security, LNCS, 2010
Abstract

Cited by 9 (6 self)
Abstract. The EPC Gen2 is an international standard that proposes the use of Radio Frequency Identification (RFID) in the supply chain. It is designed to balance cost and functionality. The development of Gen2 tags faces, in fact, several challenging constraints such as cost, compatibility regulations, power consumption, and performance requirements. As a consequence, security on board Gen2 tags is often minimal. It is mainly based on the use of on-board pseudorandomness, which is used to blind the communication between readers and tags and to acknowledge the proper execution of password-protected operations. Gen2 manufacturers are often reluctant to disclose the design of their pseudorandom generators, but security through obscurity has always been ineffective. Some open designs have also been proposed; most of them, however, fail to prove their correctness. We analyze a recent proposal presented in the literature and demonstrate that it is, in fact, insecure. We propose an alternative mechanism that fits the Gen2 constraints and satisfies the security requirements.
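As context for why bare on-board pseudorandomness is fragile: the lightweight generators used on constrained tags are often LFSR-based, and a plain LFSR is linear, so its state can be recovered from a short run of output. The sketch below is a generic 16-bit maximal-length Fibonacci LFSR producing an RN16-style value; it is purely illustrative and is not the generator analyzed in the paper.

```python
def lfsr_step(s):
    """One step of a 16-bit Fibonacci LFSR with taps at bits 16, 14, 13, 11
    (a maximal-length polynomial, so the state cycle has period 2^16 - 1)."""
    fb = (s ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1
    return (s >> 1) | (fb << 15)

def rn16(state):
    """Clock the LFSR 16 times, collecting the output bits into a 16-bit
    'RN16'-style value; returns (new_state, rn16_value)."""
    bits = 0
    for _ in range(16):
        bits = (bits << 1) | (state & 1)  # output bit is the LSB before shifting
        state = lfsr_step(state)
    return state, bits
```

Because every output bit is a linear function of the initial state, 16 observed bits suffice to solve for the state, which is exactly the kind of weakness that motivates the analysis above.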
Games for Extracting Randomness
Abstract

Cited by 3 (0 self)
Randomness is a necessary ingredient in various computational tasks, and especially in cryptography, yet many existing mechanisms for obtaining randomness suffer from numerous problems. We suggest utilizing the behavior of humans while playing competitive games as an entropy source, in order to enhance the quality of the randomness in the system. This idea has two motivations: (i) results in experimental psychology indicate that humans are able to behave quite randomly when engaged in competitive games in which a mixed strategy is optimal, and (ii) people have an affection for games, and this leads to longer play, yielding more entropy overall. While the resulting strings are not perfectly random, we show how to integrate such a game into a robust pseudorandom generator that enjoys backward and forward security. We construct a game suitable for randomness extraction and test users' playing patterns. The results show that in less than two minutes a human can generate 128 bits that are 2^−64-close to random, even on a limited computer such as a PDA that might have no other entropy source. As proof of concept, we supply complete working software for a robust PRG. It generates random sequences based solely on human game play, and thus does not depend on the operating system or any external factor.
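The general idea of distilling imperfect human-generated bits into better ones can be illustrated, in a much simplified form, by the classic von Neumann extractor, which turns a biased but independent bit stream into unbiased bits. This is a textbook stand-in, not the paper's extraction construction:

```python
def von_neumann_extract(bits):
    """Von Neumann extractor: read the input in pairs, emit 0 for the
    pair (0, 1) and 1 for (1, 0), and discard (0, 0) and (1, 1).
    If input bits are independent with a fixed bias, outputs are unbiased."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out
```

Real systems (including the robust PRG described above) instead hash the raw entropy into a generator state, since the independence assumption rarely holds for human input; the extractor merely conveys the distillation idea.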
An Experimental Study of Sorting and Branch Prediction
Abstract

Cited by 2 (0 self)
Sorting is one of the most important and well-studied problems in computer science. Many good algorithms are known which offer various trade-offs in efficiency, simplicity, memory use, and other factors. However, these algorithms do not take into account features of modern computer architectures that significantly influence performance. Caches and branch predictors are two such features, and while there has been a significant amount of research into the cache performance of general-purpose sorting algorithms, there has been little research on their branch prediction properties. In this paper we empirically examine the behaviour of the branches in all the most common sorting algorithms. We also consider the interaction of cache optimization on the predictability of the branches in these algorithms. We find insertion sort to have the fewest branch mispredictions of any comparison-based sorting algorithm, that bubble and shaker sort operate in a fashion which makes their branches highly unpredictable, that the unpredictability of shellsort's branches improves its caching behaviour, and that several cache optimizations have little effect on mergesort's branch mispredictions. We also find that optimizations to quicksort, for example the choice of pivot, have a strong influence on the predictability of its branches. We point out a simple way of removing branch instructions from a classic heapsort implementation, and show also that unrolling a loop in a cache-optimized heapsort implementation improves the predictability of its branches. Finally, we note that when sorting random data, two-level adaptive branch predictors are usually no better than simpler bimodal predictors. This is despite the fact that two-level adaptive predictors are almost always superior to bimodal predictors in general.
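The branch-removal idea mentioned for heapsort can be sketched generically: replace a data-dependent `if` on a comparison with arithmetic on the 0/1 comparison result, which a compiler can turn into a conditional move. This is an illustrative sketch of the technique in general, not the paper's heapsort code:

```python
def branchless_select(a, b):
    """Return (min, max) of two keys without a data-dependent branch:
    the boolean comparison is used directly as a 0/1 integer weight."""
    c = int(a > b)              # 1 if the pair is out of order, else 0
    lo = a * (1 - c) + b * c    # selects a when in order, b otherwise
    hi = a * c + b * (1 - c)
    return lo, hi
```

In C this pattern compiles to `cmov`-style instructions, so the selection costs the same regardless of the comparison outcome, which is exactly why it sidesteps misprediction penalties.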
random: An R package for true random numbers, 2006
Abstract
Simulation techniques are a core component of scientific computing and, more specifically, computational statistics. All simulation methods (Monte Carlo methods, bootstrapping, and estimation by simulation, to name but a few) rely on 'computer-generated randomness' (more on this below). In practice, this means sequences of random numbers. Generating 'good' (for a suitable metric) random
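A minimal example of the kind of simulation such random sequences feed, estimating π by Monte Carlo sampling; this is a generic illustration using Python's standard pseudorandom generator, not code from the R package described above:

```python
import random

def mc_pi(n, seed=0):
    """Estimate pi by drawing n points uniformly in the unit square and
    counting the fraction that land inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4.0 * inside / n
```

The estimate converges at rate O(1/sqrt(n)), and its quality depends directly on the quality of the underlying random numbers, which is the package's motivation for offering true random numbers as an alternative source.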
Recursive hashing and one-pass, . . . , 2007
Abstract
Many applications use sequences of n consecutive symbols (n-grams). We review n-gram hashing and prove that recursive hash families are pairwise independent at best. We prove that hashing by irreducible polynomials is pairwise independent, whereas hashing by cyclic polynomials is quasi-pairwise independent: we make it pairwise independent by discarding n − 1 bits. One application of hashing is to estimate the number of distinct n-grams, a view-size estimation problem. While view sizes can be estimated by sampling under statistical assumptions, we desire a statistically unassuming algorithm with universally valid accuracy bounds. Most related work has focused on repeatedly hashing the data, which is prohibitive for large data sources. We prove that a one-pass, one-hash algorithm is sufficient for accurate estimates if the hashing is sufficiently independent. For example, we can improve the theoretical bounds on estimation accuracy by a factor of 2 by replacing pairwise independent hashing with 4-wise independent hashing. We show that recursive random hashing is sufficiently independent in practice. Perhaps surprisingly, our experiments showed that hashing by cyclic polynomials, which is only quasi-pairwise independent, sometimes outperformed 10-wise independent hashing while being twice as fast. For comparison, we measured the time to obtain exact n-gram counts using suffix arrays and show that, while we used hardly any storage, we were an order of magnitude faster. The experiments used a large collection of English text from Project Gutenberg as well as synthetic data.
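Recursive hashing means the hash of each successive n-gram is derived from the previous one in O(1) rather than recomputed from scratch. A standard polynomial rolling hash conveys the idea; the base and modulus here are illustrative choices, not the paper's irreducible- or cyclic-polynomial families:

```python
def rolling_hashes(symbols, n, base=257, mod=(1 << 61) - 1):
    """Yield a hash for every n-gram of `symbols`, updating in O(1) per step:
    h_next = (h - s_i * base^(n-1)) * base + s_{i+n}   (mod m)."""
    if len(symbols) < n:
        return
    top = pow(base, n - 1, mod)          # weight of the outgoing symbol
    h = 0
    for s in symbols[:n]:                # hash of the first window
        h = (h * base + s) % mod
    yield h
    for i in range(len(symbols) - n):    # slide the window one symbol at a time
        h = ((h - symbols[i] * top) * base + symbols[i + n]) % mod
        yield h
```

Each update removes the outgoing symbol's contribution, shifts the remaining polynomial, and adds the incoming symbol, so a pass over the whole stream touches each symbol a constant number of times.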
Improved Initialisation for Centroidal Voronoi Tessellation and Optimal Delaunay Triangulation, 2012
Abstract
Centroidal Voronoi tessellations and optimal Delaunay triangulations can be approximated efficiently by nonlinear optimisation algorithms. This paper demonstrates that the point distribution used to initialise the optimisation algorithms is important. Compared to conventional random initialisation, certain low-discrepancy point distributions help convergence towards more spatially regular results and require fewer iterations for planar and volumetric tessellations.
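One common way to generate low-discrepancy initial point sets of the kind advocated above is the Halton sequence; the paper's specific distributions may differ, so treat this as one representative construction:

```python
def halton(i, base):
    """Radical inverse of integer i in the given base: mirror i's base-b
    digits about the radix point, giving the i-th term of a 1-D Halton
    sequence in [0, 1)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton_points_2d(n):
    """First n points of the 2-D Halton sequence, using coprime bases 2 and 3
    so the two coordinates do not correlate."""
    return [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]
```

Unlike uniform random samples, these points fill the unit square with provably low discrepancy, which is why they give an optimiser a more spatially even starting configuration.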
Foundation award #1012798. Transistor Scaled HPC Application Performance
Abstract
We propose a radically new, biologically inspired model of extreme-scale computer on which application performance automatically scales with the transistor count even in the face of component failures. Today's high performance computers are massively parallel systems composed of potentially hundreds of thousands of traditional processor cores, formed from trillions of transistors, consuming megawatts of power. Unfortunately, increasing the number of cores in a system, unlike increasing clock frequencies, does not automatically translate to application-level improvements. No general auto-parallelization techniques or tools exist for HPC systems. To obtain application improvements, HPC application programmers must manually cope with the challenge of multicore programming and the significant drop in reliability associated with the sheer number of transistors. Drawing on biological inspiration, the basic premise behind this work is that computation can be dramatically accelerated by integrating a very large-scale, system-wide, predictive associative memory into the operation of the computer. The memory effectively turns computation into a form of pattern recognition and prediction whose result can be used to avoid significant fractions of computation. To be effective, the expectation is that the memory will require billions of concurrent devices akin to biological cortical systems, where each device implements a small amount of storage,
The British Psychological Society
Abstract
www.bpsjournals.co.uk Randomized controlled trial of a brief research-based intervention promoting fruit and vegetable consumption
Abstract, 2008
Abstract
In multimedia, text, or bioinformatics databases, applications query sequences of n consecutive symbols called n-grams. Estimating the number of distinct n-grams is a view-size estimation problem. While view sizes can be estimated by sampling under statistical assumptions, we desire an unassuming algorithm with universally valid accuracy bounds. Most related work has focused on repeatedly hashing the data, which is prohibitive for large data sources. We prove that a one-pass, one-hash algorithm is sufficient for accurate estimates if the hashing is sufficiently independent. To reduce costs further, we investigate recursive random hashing algorithms and show that they are sufficiently independent in practice. We compare our running times with exact counts using suffix arrays and show that, while we use hardly any storage, we are an order of magnitude faster. The approach is further extended to a one-pass/one-hash computation of n-gram entropy and iceberg counts. The experiments use a large collection of English text from Project Gutenberg as well as synthetic data.
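The flavour of a one-pass, one-hash distinct-count estimate can be sketched with a Flajolet–Martin-style counter: hash each item exactly once and remember only the maximum run of trailing zero bits seen. This simplified stand-in conveys the principle, not the paper's estimator or its accuracy bounds:

```python
import hashlib

def fm_distinct_estimate(items):
    """Crude one-pass distinct-count estimate: hash each item once, track the
    maximum number of trailing zero bits over all hashes, and return 2^max.
    Memory use is a single integer regardless of stream length."""
    max_z = 0
    for it in items:
        h = int.from_bytes(hashlib.sha256(it.encode()).digest()[:8], "big")
        z = (h & -h).bit_length() - 1 if h else 64  # index of lowest set bit
        max_z = max(max_z, z)
    return 2 ** max_z
```

Because duplicates hash identically, they never change the maximum, so the counter responds only to distinct items; practical estimators average many such observations to tame the variance of a single hash.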