Results 1–10 of 23
SPRNG: A Scalable Library for Pseudorandom Number Generation
"... In this article we present background, rationale, and a description of the Scalable Parallel Random
Number Generators (SPRNG) library. We begin by presenting some methods for parallel pseudorandom number generation. We will focus on methods based on parameterization, meaning that we will not conside ..."
Abstract

Cited by 38 (6 self)
In this article we present background, rationale, and a description of the Scalable Parallel Random Number Generators (SPRNG) library. We begin by presenting some methods for parallel pseudorandom number generation. We will focus on methods based on parameterization, meaning that we will not consider splitting methods such as the leapfrog or blocking methods. We describe in detail parameterized versions of the following pseudorandom number generators: (i) linear congruential generators, (ii) shift-register generators, and (iii) lagged-Fibonacci generators. We briefly describe the methods, detail some advantages and disadvantages of each method, and recount results from number theory that impact our understanding of their quality in parallel applications. SPRNG was designed around the uniform implementation of different families of parameterized random number generators. We then present a short description of SPRNG. The description contained within this document is meant only to outline the rationale behind and the capabilities of SPRNG. Much more information, including examples and detailed documentation aimed at helping users with porting and using SPRNG on scalable systems, is available at the URL http://sprng.cs.fsu.edu/RNG. In this description of SPRNG we discuss the random number generator library as well as the suite of tests of randomness that is an integral part of SPRNG. Random number tools for parallel Monte Carlo applications must be subjected to classical as well as new types of empirical tests of randomness to eliminate generators that show defects when used in scalable environments.
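The parameterization approach described above can be illustrated with a minimal sketch (our own illustration, not SPRNG's actual parameters or code): each stream receives its own additive constant in a linear congruential recurrence, so different streams traverse different sequences even from identical seeds.

```python
# Minimal sketch of per-stream parameterization for linear congruential
# generators (illustrative only; not SPRNG's actual implementation).
M = 2**64                      # power-of-two modulus (assumed for illustration)
A = 6364136223846793005        # Knuth's MMIX multiplier

def make_stream(stream_id, seed=12345):
    """Return a generator function for one parameterized stream."""
    c = 2 * stream_id + 1      # distinct odd additive constant per stream
    x = seed
    def next_u01():
        nonlocal x
        x = (A * x + c) % M
        return x / M           # uniform variate in [0, 1)
    return next_u01

s0, s1 = make_stream(0), make_stream(1)
```

Distinct constants yield distinct sequences from the same seed, which is the property parameterized methods rely on instead of splitting one long sequence.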
Parallel implementation of stochastic simulation for large-scale cellular processes
 In: Proceedings of the 8th International Conference on High-Performance Computing in Asia-Pacific Region
, 2005
"... Abstract ..."
(Show Context)
Benchmark Of The Unrolling Of Pseudorandom Numbers Generators
, 2002
"... Research software involving stochastic behaviour often requires an enormous quantity of random numbers. In addition to the quality of the pseudorandom number generator (PRNG), the speed of the algorithm and the ease of its implementation are common practical aspects. In this work we will discuss how ..."
Abstract

Cited by 2 (0 self)
Research software involving stochastic behaviour often requires an enormous quantity of random numbers. In addition to the quality of the pseudorandom number generator (PRNG), the speed of the algorithm and the ease of its implementation are common practical concerns. In this work we discuss how to optimize the access speed to random numbers, independently of the generation algorithm, using a lookup table. This idea was exploited in the late fifties, when the Rand Corporation started to propose sets of ready-to-use pseudorandom numbers (PRNs). The need for larger and larger sets of PRNs eliminated the possibility of storing those sets in the memory of past computers; even supercomputers were not able to store tables with hundreds of millions of PRNs. In this paper we propose an implementation technique to speed up any kind of PRNG, taking into account the capacities of current computers and microcomputers. The speed of our solution stems from the classical unrolling optimization technique; it is named the URNG (Unrolled Random Number Generator) technique. Random numbers are first generated in source code, then precompiled and stored inside the RAM of inexpensive computers at executable loading time. With this technique, random numbers need to be computed only once. The URNG technique is compatible with parallel computing. Limits and effects on speed and sensitivity are explored over four computer generations with a simple Monte Carlo simulation. Every research field using stochastic computation may benefit from this software optimization technique.
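The table-based access pattern described above can be sketched as follows (a minimal Python illustration of our own; the paper's actual URNG technique stores precompiled numbers in the executable, whereas here we simply pregenerate a block in RAM and cycle through it):

```python
import random

# Sketch of the lookup-table idea: generate a large block of PRNs once,
# keep it in RAM, and serve later requests by cycling through the table.
# Names and sizes here are illustrative assumptions, not the paper's.
TABLE_SIZE = 100_000

def build_table(seed=42, size=TABLE_SIZE):
    """Pregenerate the table once; any PRNG could fill it."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(size)]

class UnrolledRNG:
    """Serve pregenerated numbers instead of computing them on demand."""
    def __init__(self, table):
        self.table = table
        self.i = 0
    def random(self):
        v = self.table[self.i]
        self.i = (self.i + 1) % len(self.table)  # wrap when exhausted
        return v

table = build_table()
rng = UnrolledRNG(table)
sample = [rng.random() for _ in range(5)]
```

Note the obvious caveat of any table scheme: once the index wraps around, the sequence repeats, so the table must be large relative to the consumption of the simulation.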
New alternate ring-coupled map for multi-random number generation
 Journal of Nonlinear Systems and Applications
"... Abstract. An improved Lozi function with alternate coefficients has been proposed. The modifications in the model allow to remove the holes in the attractor which are not desirable, but appeared in the previous Lozi function; in this way, an everywhere dense attractor can be obtained. Moreover, th ..."
Abstract

Cited by 1 (1 self)
Abstract. An improved Lozi function with alternate coefficients is proposed. The modifications to the model remove the undesirable holes in the attractor that appeared in the previous Lozi function; in this way, an everywhere-dense attractor can be obtained. Moreover, the strong sensitivity to the type of binarisation (conversion of real values to 0 and 1) is demonstrated; this conversion to binary numbers is instrumental in applying the NIST tests for randomness. The results have been validated and compared via NIST tests for the different methods of quantization. Finally, it has been verified that the multi-random properties of the output signal are improved thanks to the following strategies: undersampling of the output signal and increasing the system order.
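The binarisation step this abstract refers to can be sketched with the classical Lozi map and a simple sign threshold (this uses the standard map with its usual chaotic parameters, not the paper's improved alternate-coefficient variant or its specific quantization methods):

```python
# Sketch of chaotic-map bit generation: iterate the classical Lozi map
#   x' = 1 - a|x| + y,  y' = b x
# and binarise each output by its sign. Parameters a=1.7, b=0.5 are the
# standard chaotic choice; the paper's improved variant differs.
def lozi_bits(n, a=1.7, b=0.5, x=0.1, y=0.1):
    """Return n bits obtained by sign-thresholding the Lozi orbit."""
    bits = []
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x   # simultaneous update
        bits.append(1 if x > 0.0 else 0)     # sign-based binarisation
    return bits

bits = lozi_bits(16)
```

As the abstract notes, the choice of binarisation strongly affects the statistical quality of the resulting bitstream, which is why the NIST tests are run for each quantization method.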
A GRID COMPUTING INFRASTRUCTURE FOR MONTE CARLO APPLICATIONS
"... The Office of Graduate Studies had verified and approved the above named committee members. To my father and mother. iii ACKNOWLEDGEMENTS First of all, I would like to express my sincere thanks and appreciation to my advisor and “cheer leader, ” Dr. Michael Mascagni for his continuous encouragement ..."
Abstract

Cited by 1 (0 self)
The Office of Graduate Studies has verified and approved the above-named committee members. To my father and mother. ACKNOWLEDGEMENTS First of all, I would like to express my sincere thanks and appreciation to my advisor and “cheer leader,” Dr. Michael Mascagni for his continuous encouragement and kind help in my Ph.D. research and study. Also, I give thanks to all my other members in
Parallel Random Number Generators in Java
, 2003
"... Scientific computing has long been pushing the boundaries of computational requirements in computer science. An important aspect of scientific computing is the generation of large quantities of random numbers, especially in parallel to take advantage of parallel architectures. Many science and engin ..."
Abstract

Cited by 1 (0 self)
Scientific computing has long been pushing the boundaries of computational requirements in computer science. An important aspect of scientific computing is the generation of large quantities of random numbers, especially in parallel to take advantage of parallel architectures. Many science and engineering programs require random numbers for applications like Monte Carlo simulation. One environment suitable for parallel computing is Java, though it is rarely used for scientific applications due to its perceived slowness compared to compiled languages like C. Through research and recommendations, Java is slowly being shaped into a viable language for such computationally intense applications. Java has the potential for such large-scale applications, since it is a modern language with a large programmer base and many well-received features such as built-in support for parallelism using threads. With improved performance from better compilers, Java is becoming more commonly used for scientific computing, but it still lacks a number of features like optimised scientific software libraries. This project looks at the effectiveness and efficiency of implementing a parallel random number
On the Fast Generation of Long-period Pseudorandom Number Sequences
"... AbstractMonte Carlo simulations and other scientific applications that depend on random numbers are increasingly implemented in parallel configurations in programmable hardware. Highquality pseudorandom number generators (PRNGs), such as the Mersenne Twister, are based on binary linear recurrenc ..."
Abstract
 Add to MetaCart
(Show Context)
Abstract—Monte Carlo simulations and other scientific applications that depend on random numbers are increasingly implemented in parallel configurations in programmable hardware. High-quality pseudorandom number generators (PRNGs), such as the Mersenne Twister, are based on binary linear recurrence equations. They have extremely long periods (more than 2^1024 numbers generated before the entire sequence repeats) and well-proven statistical properties. Many software implementations of such 'long-period' PRNGs exist, but hardware implementations are rare. We develop optimized, resource-efficient parallel architectures for long-period PRNGs that generate multiple independent streams by exploiting the underlying algorithm as well as hardware-specific architectural features. We demonstrate the utility of the framework through parallelized implementations of three types of PRNGs on a field-programmable gate array (FPGA). The area/throughput performance is impressive: for example, compared clock-for-clock with a previous FPGA implementation, a "two-parallelized" 32-bit Mersenne Twister uses 41% fewer resources. It can also scale to 350 MHz for a throughput of 22.4 Gbps, which is 5.5x faster than the previous implementation and 7.1x faster than a dedicated software implementation. The quality of generated random numbers is verified with standard statistical test batteries. To complete testing, we present a real-world application study by coupling our parallel hardware RNGs to the Ziggurat algorithm for generating normal random variables. The availability of fast long-period random number generators accelerates hardware-based scientific simulations and allows them to scale to greater complexities.
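The k-parallelized idea (advancing the recurrence several steps per clock and emitting several outputs at once) can be sketched in software, with a toy LCG standing in for the Mersenne Twister recurrence; this is our own illustration, not the paper's hardware design:

```python
# Sketch of a "k-parallelized" generator: advance the recurrence k steps
# per call and emit k outputs at once, mimicking hardware that delivers
# several numbers per clock. A toy LCG stands in for the Mersenne
# Twister recurrence used in the paper.
M, A, C = 2**32, 1664525, 1013904223   # Numerical Recipes LCG constants

def parallelized(seed, k=2):
    """Yield lists of k successive outputs per iteration."""
    x = seed
    while True:
        out = []
        for _ in range(k):
            x = (A * x + C) % M
            out.append(x)
        yield out                      # k numbers delivered "per clock"

gen = parallelized(1, k=2)
first, second = next(gen), next(gen)   # two clocks, four numbers total
```

The parallelized stream is just a regrouping of the serial sequence, which is what makes a clock-for-clock comparison against a serial implementation meaningful.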
An Agent-Based Approach to Immune Modelling: Priming Individual Response
"... Abstract—This study focuses on examining why the range of experience with respect to HIV infection is so diverse, especially in regard to the latency period. An agentbased approach in modelling the infection is used to extract highlevel behaviour which cannot be obtained analytically from the set ..."
Abstract
 Add to MetaCart
(Show Context)
Abstract—This study focuses on examining why the range of experience with respect to HIV infection is so diverse, especially in regard to the latency period. An agent-based approach in modelling the infection is used to extract high-level behaviour which cannot be obtained analytically from the set of interaction rules at the cellular level. A prototype model encompasses local variation in baseline properties, contributing to the individual disease experience, and is included in a network which mimics the chain of lymph nodes. The model also accounts for stochastic events such as viral mutations. The size and complexity of the model require major computational effort, and parallelisation methods are used. Keywords—HIV, Immune modelling, Agent-based system, Individual response.
Copyright and Reprint Permissions
, 2011
"... Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limit of U.S. copyright law for private use of patrons those articles in this volume that carry a code at the bottom of the first page, provided the per‐copy fee indicated in the code is paid through ..."
Abstract
 Add to MetaCart
Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limit of U.S. copyright law for private use of patrons those articles in this volume that carry a code at the bottom of the first page, provided the per‐copy fee indicated in the code is paid through Copyright Clearance Center, 222 Rosewood Drive,
Nomenclature
"... The mechanism of randomization in late boundary layer transition is studied carefully by high order DNS. The randomization was originally considered as a result of background noise. It was addressed that the large ring structure is affected by background noises first and then the change of large rin ..."
Abstract
 Add to MetaCart
(Show Context)
The mechanism of randomization in late boundary layer transition is studied carefully by high-order DNS. The randomization was originally considered a result of background noise: it was argued that the large ring structure is affected by background noise first, and the change of the large ring structure then quickly affects the small length scales, directly leading to randomization and the formation of turbulence. However, our new DNS results show that all small length scales are generated by high-shear layers without exception. The asymmetric structure first appears at the middle levels in the streamwise and spanwise directions, which eventually deforms the shape of the shear layer. The consequence is that some of the small length scales around the high shear disappear and the small vortex legs detach from the wall through separation, so that the flow loses symmetry and becomes randomized.