Results 1–10 of 44
An Immunological Model of Distributed Detection and Its Application to Computer Security
, 1999
Cited by 86 (5 self)
This dissertation explores an immunological model of distributed detection, called negative detection, and studies its performance in the domain of intrusion detection on computer networks. The goal of the detection system is to distinguish between illegitimate behaviour (nonself) and legitimate behaviour (self). The detection system consists of sets of negative detectors that detect instances of nonself; these detectors are distributed across multiple locations. The negative detection model was developed previously; this research extends that previous work in several ways. Firstly, analyses are derived for the negative detection model. In particular, a framework for explicitly incorporating distribution is developed, and is used to demonstrate that negative detection is both scalable and robust. Furthermore, it is shown that any scalable distributed detection system that requires communication (memory sharing) is always less robust than a system that does not require communication...
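The negative-detection scheme the abstract describes can be seen in a few lines: candidate detectors are generated at random and censored against the self set, and anything a surviving detector matches is flagged as nonself. This is a toy sketch only, with exact-string matching standing in for the partial-matching rules (e.g. r-contiguous bits) used in the actual work; all names and parameters are illustrative.

```python
import random

def generate_detectors(self_set, n_detectors, length, rng):
    """Negative selection: keep only candidates that match no self string."""
    detectors = set()
    while len(detectors) < n_detectors:
        candidate = "".join(rng.choice("01") for _ in range(length))
        if candidate not in self_set:      # censor anything matching self
            detectors.add(candidate)
    return detectors

def flags_nonself(sample, detectors):
    """A sample is flagged when some negative detector matches it."""
    return sample in detectors

rng = random.Random(0)
self_set = {"0000", "0101", "1111"}        # observed legitimate behaviour
detectors = generate_detectors(self_set, 5, 4, rng)
# Because no detector survives that matches self, self is never flagged:
assert not any(flags_nonself(s, detectors) for s in self_set)
```

Because detectors carry no shared state, sets of them can be placed at independent locations, which is the distribution property the dissertation analyzes.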
SPRNG: A Scalable Library for Pseudorandom Number Generation
Cited by 29 (6 self)
In this article we present background, rationale, and a description of the Scalable Parallel Random Number Generators (SPRNG) library. We begin by presenting some methods for parallel pseudorandom number generation. We will focus on methods based on parameterization, meaning that we will not consider splitting methods such as the leapfrog or blocking methods. We describe in detail parameterized versions of the following pseudorandom number generators: (i) linear congruential generators, (ii) shift-register generators, and (iii) lagged-Fibonacci generators. We briefly describe the methods, detail some advantages and disadvantages of each method, and recount results from number theory that impact our understanding of their quality in parallel applications. SPRNG was designed around the uniform implementation of different families of parameterized random number generators. We then present a short description of SPRNG. The description contained within this document is meant only to outline the rationale behind and the capabilities of SPRNG. Much more information, including examples and detailed documentation aimed at helping users with installing and using SPRNG on scalable systems, is available at the URL http://sprng.cs.fsu.edu/RNG. In this description of SPRNG we discuss the random number generator library as well as the suite of tests of randomness that is an integral part of SPRNG. Random number tools for parallel Monte Carlo applications must be subjected to classical as well as new types of empirical tests of randomness to eliminate generators that show defects when used in scalable environments.
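The parameterization idea — giving each stream its own member of a generator family rather than splitting one stream — can be illustrated with a toy LCG family in which stream k receives its own additive constant. The constants, modulus, and seed here are illustrative only; SPRNG's actual parameterizations (of multipliers, or of lagged-Fibonacci lag tables) are chosen with number-theoretic care.

```python
class ParamLCG:
    """Stream k of a toy parameterized LCG family:
    x_{n+1} = (a * x_n + c_k) mod m, with a distinct odd c_k per stream.
    Illustrative sketch of the parameterization approach, not SPRNG itself."""
    def __init__(self, stream_id, m=2**31, a=1103515245, seed=12345):
        self.m, self.a = m, a
        self.c = 2 * stream_id + 1    # per-stream constant parameterizes the family
        self.x = seed

    def next(self):
        self.x = (self.a * self.x + self.c) % self.m
        return self.x

streams = [ParamLCG(k) for k in range(4)]   # one independent stream per process
firsts = [s.next() for s in streams]
assert len(set(firsts)) == 4                # streams diverge from the first draw
```

The point of parameterization is that each process owns a whole generator, so no coordination or splitting of a single sequence is needed at run time.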
Random Number Generators for Parallel Computers
 The NHSE Review
, 1997
Cited by 24 (1 self)
Random number generators are used in many applications, from slot machines to simulations of nuclear reactors. For many computational science applications, such as Monte Carlo simulation, it is crucial that the generators have good randomness properties. This is particularly true for large-scale simulations done on high-performance parallel computers. Good random number generators are hard to find, and many widely used techniques have been shown to be inadequate. Finding high-quality, efficient algorithms for random number generation on parallel computers is even more difficult. Here we present a review of the most commonly used random number generators for parallel computers, and evaluate each generator based on theoretical knowledge and empirical tests. In conclusion, we provide recommendations for using random number generators on parallel computers.

Outline. This review is organized as follows: A brief summary of the findings of this review is first presented, giving an overview of the use of parallel random number generators and a list of recommended algorithms. Section 1 is an introduction to random number generators and their use in computer simulations on parallel computers. Section 2 is a summary of the methods used to test and evaluate random number generators, on both sequential and parallel computers. Section 3 gives an overview of the main algorithms used to implement random number generators on sequential computers, provides examples of software implementations of the algorithms, and states any known problems with the algorithms or implementations. Section 4 gives a description of the most common methods used to parallelize the sequential algorithms, provides examples of software implementing these algorithms, and states any known problems ...
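Of the parallelization methods such a review covers, leapfrog splitting is the easiest to show concretely: processor i of P takes every P-th value of one base sequence, and for a multiplicative LCG the resulting substream is itself an LCG with multiplier a^P mod m. A minimal sketch using MINSTD-style parameters (the choice of parameters and P is illustrative):

```python
def lcg(seed, n, a=16807, m=2**31 - 1):
    """First n values of the multiplicative LCG x_{k+1} = a * x_k mod m."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x) % m
        out.append(x)
    return out

P = 4                                   # number of processors (illustrative)
full = lcg(1, 20)
sub0 = full[0::P]                       # processor 0's leapfrog substream
# The substream is itself an LCG whose multiplier is a^P mod m:
aP = pow(16807, P, 2**31 - 1)
assert sub0 == [full[0]] + lcg(full[0], len(sub0) - 1, a=aP)
```

This identity is what makes leapfrogging cheap: each processor jumps straight to its own substream without generating or communicating the values it skips.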
Parallel linear congruential generators with prime moduli
 Parallel Computing
, 1998
Cited by 20 (7 self)
Abstract. Linear congruential generators (LCGs) remain the most popular method of pseudorandom number generation on digital computers. Ease of implementation has favored implementing LCGs with power-of-two moduli. However, prime modulus LCGs are superior in quality to power-of-two modulus LCGs, and the use of a Mersenne prime minimizes the computational cost of generation. When implemented for parallel computation, quality becomes an even more compelling issue. We use a full-period exponential sum as the measure of stream independence and present a method for producing provably independent streams of LCGs in parallel by utilizing an explicit parameterization of all of the primitive elements modulo a given prime. The minimization of this measure of independence further motivates an algorithm required in the explicit parameterization. We describe and analyze this algorithm and describe its use in a parallel LCG package.

1. Introduction. Perhaps the oldest generator still in use for the generation of uniformly distributed integers is the linear congruential generator (LCG). This generator is sometimes ...
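The computational advantage of a Mersenne prime modulus comes from replacing division by shifts and adds: since 2^31 ≡ 1 (mod 2^31 − 1), the high bits of a product can simply be folded onto the low bits. A sketch for p = 31 (the folding identity is standard; the multiplier below is one I take to be a full-period choice, used here only for illustration):

```python
M31 = 2**31 - 1                     # the Mersenne prime 2^31 - 1

def mod_m31(x):
    """Reduce x mod 2^31 - 1 with shifts and adds: since 2^31 ≡ 1 (mod M31),
    folding the high bits onto the low bits preserves the residue."""
    while x > M31:
        x = (x & M31) + (x >> 31)
    return 0 if x == M31 else x

def lcg_step(x, a=62089911):        # illustrative multiplier, assumed full-period
    """One step of a prime-modulus LCG using the cheap Mersenne reduction."""
    return mod_m31(a * x)

# The folded reduction agrees with ordinary division-based reduction:
assert all(mod_m31(v) == v % M31 for v in (0, M31, M31 + 1, 3 * M31 + 7, 2**62))
```

On hardware without fast 64-bit division this trade of one remainder for a couple of shifts and adds is exactly the cost saving the abstract alludes to.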
Sequential Monte Carlo Techniques for the Solution of Linear Systems
 Journal of Scientific Computing
, 1994
Cited by 18 (1 self)
Given a linear system Ax = b, where x is an m-vector, direct numerical methods, such as Gaussian elimination, take time O(m^3) to find x. Iterative numerical methods, such as the Gauss-Seidel method or SOR, reduce the system to the form x = a + Hx, whence x = ∑_{r=0}^{∞} H^r a, and then apply the iterations x_0 = a, x_{s+1} = a + H x_s, until sufficient accuracy is achieved; this takes time O(m^2) per iteration. They generate the truncated sums x_s = ∑_{r=0}^{s} H^r a. The usual plain Monte Carlo approach uses independent "random walks" to give an approximation to the truncated sum x_s, taking time O(m) per random step. Unfortunately, millions of random steps are typically needed to achieve reasonable accuracy (say, 1% r.m.s. error). Nevertheless, this is what has had to be done, if m is itself of the order of a million or more. The alternative presented here is to apply a sequential Monte Carlo method, in which the sampling scheme is iteratively improved. Simply put, if x = y + z, where y is a current estimate of x, then its correction, z, satisfies z = d + Hz, where d = a + Hy − y. At each stage, one uses plain Monte Carlo to estimate z, and so, the new estimate y. If the sequential computation of d is itself approximated, numerically or stochastically, then the expected time for this process to reach a given accuracy is again O(m) per random step; but the number of steps is dramatically reduced [improvement factors of about 5,000, 26,000, and 700 have been obtained in preliminary ...
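The correction scheme can be seen in a small deterministic sketch: given the current estimate y of x = a + Hx, the correction z satisfies z = d + Hz with d = a + Hy − y, and each stage replaces y by y + z. In the paper the inner estimate of z comes from plain Monte Carlo random walks; here a two-term truncation z ≈ d + Hd stands in for that inner estimate, and H, a, and the stage count are all illustrative.

```python
H = [[0.1, 0.2, 0.0],
     [0.0, 0.1, 0.2],
     [0.2, 0.0, 0.1]]          # a contraction (row sums < 1), so the series converges
a = [1.0, 1.0, 1.0]

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

y = [0.0, 0.0, 0.0]            # current estimate of the solution of x = a + Hx
for _ in range(30):
    d = [ai + hy - yi for ai, hy, yi in zip(a, matvec(H, y), y)]   # d = a + Hy - y
    z = [di + hd for di, hd in zip(d, matvec(H, d))]               # inner estimate of z
    y = [yi + zi for yi, zi in zip(y, z)]                          # y <- y + z

# y should now satisfy the fixed-point equation y ≈ a + Hy:
res = max(abs(ai + hy - yi) for ai, hy, yi in zip(a, matvec(H, y), y))
assert res < 1e-12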
Random Number Generators for Parallel Applications
 in Monte Carlo Methods in Chemical Physics
, 1998
Cited by 17 (7 self)
... this article is devoted, because these computations require the highest quality of random numbers. The ability to do a multidimensional integral relies on properties of uniformity of n-tuples of random numbers and/or the equivalent property that random numbers be uncorrelated. The quality aspect in the other uses is normally less important simply because the models are usually not all that precisely specified. The largest uncertainties are typically due more to approximations arising in the formulation of the model than those caused by lack of randomness in the random number generator. In contrast, the first class of applications can require very precise solutions. Increasingly, computers are being used to solve very well-defined but hard mathematical problems. For example, as Dirac [1] observed in 1929, the physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are completely known and it is only necessary to find precise methods for solving the equations for complex systems. In the intervening years fast computers and new computational methods have come into existence. In quantum chemistry, physical properties must be calculated to "chemical accuracy" (say 0.001 Rydbergs) to be relevant to physical properties. This often requires a relative accuracy of 10 ...
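A concrete instance of the n-tuple requirement: a two-dimensional Monte Carlo integral consumes the stream in pairs, so pair-wise correlations bias exactly this kind of computation. A quarter-circle area estimate makes the point (seed and sample size are illustrative):

```python
import random

# Estimate pi by sampling points in the unit square and counting hits
# inside the quarter disk; each sample consumes the stream as a 2-tuple,
# so non-uniformity of pairs would bias the estimate directly.
rng = random.Random(42)
n = 100_000
hits = sum(1 for _ in range(n)
           if rng.random() ** 2 + rng.random() ** 2 < 1.0)
pi_est = 4.0 * hits / n
assert abs(pi_est - 3.141592653589793) < 0.05
```

A generator with perfect one-dimensional uniformity but correlated successive values can still fail here, which is why tests of k-tuple uniformity matter for Monte Carlo work.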
Theory, Techniques, And Experiments In Solving Recurrences In Computer Programs
, 1997
Cited by 16 (2 self)
... work. In the sixth chapter, we consider the application of these same techniques focused on obtaining parallelism in outer time-stepping loops. In the final chapter, we draw this work to a conclusion and discuss future directions in parallelizing compiler technology.
On the Automatic Parallelization of Sparse and Irregular Fortran Codes
, 1997
Cited by 13 (8 self)
This paper studies how well automatic parallelization techniques work on a collection of real codes with sparse and irregular access patterns. In conducting this work, we have compared existing technology in the commercial parallelizer PFA from SGI with the Polaris restructurer [7]. In cases ...

This work is supported by U.S. Army contract #DABT63-95-C-0097 and is not necessarily representative of the positions or policies of the Army or the Government.
A Collection of Selected Pseudorandom Number Generators with Linear Structures
, 1997
Cited by 13 (2 self)
This is a collection of selected linear pseudorandom number generators that were implemented in commercial software, used in applications, and some of which have extensively been tested. The quality of these generators is examined using scatter plots and the spectral test. In addition, the spectral test is applied to study the applicability of linear congruential generators on parallel architectures.

Additional Key Words and Phrases: Pseudorandom number generator, linear congruential generator, multiple recursive generator, combined pseudorandom number generators, parallel pseudorandom number generator, lattice structure, spectral test.

Research supported by the Austrian Science Foundation (FWF), project no. P11143-MAT.

Contents
1 Linear congruential generator: LCG
1.1 LCG(2^31; 1103515245; 12345; 12345) ANSI C
1.2 LCG(2^31 − 1; a = 7^5 = 16807; 0; 1) MINSTD
1.3 LCG ...
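The two generators at the head of the contents list can be written directly from their LCG(m; a; c; x0) parameters. A generic stepping function (only the trivially checkable first MINSTD value is asserted):

```python
def lcg(m, a, c, seed):
    """LCG(m; a; c; x0): yields x_{n+1} = (a * x_n + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

ansi_c = lcg(2**31,     1103515245, 12345, 12345)   # 1.1 ANSI C
minstd = lcg(2**31 - 1, 16807,      0,     1)       # 1.2 MINSTD
assert next(minstd) == 16807        # first step: 16807 * 1 mod (2^31 - 1)
assert 0 <= next(ansi_c) < 2**31
```

Plotting successive pairs (x_n, x_{n+1}) from such generators reveals the lattice structure that the collection's scatter plots and spectral tests quantify.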
Perspectives on the Evolution of Simulation
 Operations Research
, 2002
Cited by 13 (1 self)
Simulation is introduced in terms of its different forms and uses, but the focus on discrete event modeling for systems analysis is dominant, as it has been during the evolution of the technique within operations research and the management sciences. This evolutionary trace over almost fifty years notes the importance of bidirectional influences with computer science, probability and statistics, and mathematics. No area within the scope of operations research and the management sciences has been affected more by advances in computing technology than simulation. This assertion is affirmed in the review of progress in those technical areas that collectively define the art and science of simulation. A holistic description of the field must include the roles of professional societies, conferences and symposia, and publications. The closing citation of a scientific value judgment from over 30 years in the past hopefully provides a stimulus for contemplating what lies ahead in the next 50 years.