Results 1-10 of 1,022
The PARSEC benchmark suite: Characterization and architectural implications
 Princeton University
, 2008
Abstract

Cited by 486 (3 self)
This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previously available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
The Node Distribution of the Random Waypoint Mobility Model for Wireless Ad Hoc Networks
, 2003
Abstract

Cited by 372 (11 self)
The random waypoint model is a commonly used mobility model in the simulation of ad hoc networks. It is known that the spatial distribution of network nodes moving according to this model is, in general, non-uniform. However, a closed-form expression of this distribution and an in-depth investigation is still missing. This fact impairs the accuracy of the current simulation methodology of ad hoc networks and makes it impossible to relate simulation-based performance results to corresponding analytical results. To overcome these problems, we present a detailed analytical study of the spatial node distribution generated by random waypoint mobility. More specifically, we consider a generalization of the model in which the pause time of the mobile nodes is chosen arbitrarily in each waypoint and a fraction of nodes may remain static for the entire simulation time. We show that the structure of the resulting distribution is the weighted sum of three independent components: the static, pause, and mobility component. This division enables us to understand how the model's parameters influence the distribution. We derive an exact equation of the asymptotically stationary distribution for movement on a line segment and an accurate approximation for a square area. The good quality of this approximation is validated through simulations using various settings of the mobility parameters. In summary, this article gives a fundamental understanding of the behavior of the random waypoint model.
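The non-uniform spatial distribution described in this abstract is easy to observe empirically. The sketch below (all parameter values are illustrative, not taken from the paper) simulates one node under a zero-pause-time random waypoint model on the unit square and compares how often it visits an equal-area cell at the center versus one in a corner.

```python
import random

def random_waypoint(steps=20000, speed=0.01, seed=42):
    """Sample positions of one node moving under the random waypoint
    model (zero pause time) on the unit square."""
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()
    positions = []
    while len(positions) < steps:
        # Pick the next waypoint uniformly and move toward it at constant speed.
        wx, wy = rng.random(), rng.random()
        dx, dy = wx - x, wy - y
        dist = (dx * dx + dy * dy) ** 0.5
        n = max(1, int(dist / speed))
        for i in range(1, n + 1):
            positions.append((x + dx * i / n, y + dy * i / n))
            if len(positions) >= steps:
                break
        x, y = wx, wy
    return positions

pos = random_waypoint()
# Two cells of equal area (0.04): one at the center, one in a corner.
center = sum(1 for x, y in pos if 0.4 <= x <= 0.6 and 0.4 <= y <= 0.6)
corner = sum(1 for x, y in pos if x <= 0.2 and y <= 0.2)
print(center, corner)  # the center cell is visited markedly more often
```

The center-heavy bias this produces is exactly the mobility component of the distribution the paper derives in closed form.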
On the Minimum Node Degree and Connectivity of a Wireless Multihop Network
 ACM MobiHoc
, 2002
Abstract

Cited by 321 (4 self)
This paper investigates two fundamental characteristics of a wireless multihop network: its minimum node degree and its k–connectivity. Both topology attributes depend on the spatial distribution of the nodes and their transmission range. Using typical modeling assumptions — a random uniform distribution of the nodes and a simple link model — we derive an analytical expression that enables the determination of the required range r0 that creates, for a given node density ρ, an almost surely k–connected network. Equivalently, if the maximum r0 of the nodes is given, we can find out how many nodes are needed to cover a certain area with a k–connected network. We also investigate these questions by various simulations and thereby verify our analytical expressions. Finally, the impact of mobility is discussed. The results of this paper are of practical value for researchers in this area, e.g., if they set the parameters in a network–level simulation of a mobile ad hoc network or if they design a wireless sensor network. Categories and Subject Descriptors C.2 [Computer-communication networks]: Network architecture and design—wireless communication, network communications, network topology; G.2.2 [Discrete mathematics]: Graph theory; F.2.2 [Probability and statistics]: Stochastic processes
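As a rough illustration of the kind of range calculation this line of work enables, the snippet below uses the standard Poisson approximation P(d_min >= 1) ~ (1 - exp(-ρπr0²))^n for the k = 1 case (no isolated node). The network size, area, and confidence level are hypothetical, and this is a sketch of the approach, not the paper's exact formula for general k.

```python
import math

def range_for_no_isolated_node(n, area, p=0.99):
    """Radio range r0 such that n nodes placed uniformly on a region of the
    given area have no isolated node with probability about p, using the
    Poisson approximation P(d_min >= 1) ~ (1 - exp(-rho*pi*r0^2))^n."""
    rho = n / area                      # node density
    return math.sqrt(-math.log(1.0 - p ** (1.0 / n)) / (rho * math.pi))

# Hypothetical scenario: 500 nodes on a 1000 m x 1000 m field.
r0 = range_for_no_isolated_node(n=500, area=1000 * 1000)
print(round(r0, 1))  # on the order of tens of meters for this density
```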
COPASI: a COmplex PAthway SImulator
 Bioinformatics
, 2006
Abstract

Cited by 256 (6 self)
Motivation: Simulation and modeling is becoming a standard approach to understand complex biochemical processes. Therefore, there is a need for software tools that allow access to diverse simulation and modeling methods as well as support for the usage of these methods. Results: Here, we present COPASI, a platform-independent and user-friendly biochemical simulator that offers several unique features. We discuss numerical issues with these features, in particular the criteria to switch between stochastic and deterministic simulation methods, hybrid deterministic-stochastic methods, and the importance of the numerical resolution of the random number generator in stochastic simulation. Availability: The complete software is available in binary (executable) form for MS Windows, OS X, Linux (Intel), and Sun Solaris (SPARC), as well as the full source code under an open source license from
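As a minimal illustration of the stochastic-versus-deterministic distinction this abstract refers to, the sketch below implements Gillespie's direct method for a single decay reaction A → ∅ and compares the result to the deterministic ODE solution. This is not COPASI code, and all parameter values are illustrative.

```python
import math
import random

def gillespie_decay(n0=1000, k=1.0, t_end=2.0, seed=1):
    """Gillespie's direct method for the single reaction A -> 0 with rate
    constant k: the stochastic counterpart of the ODE dA/dt = -k*A."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while n > 0:
        a = k * n                    # total reaction propensity
        t += rng.expovariate(a)      # exponential waiting time to next event
        if t > t_end:
            break
        n -= 1                       # one molecule of A decays
    return n

stoch = gillespie_decay()
det = 1000 * math.exp(-1.0 * 2.0)    # deterministic solution A(t) = A0*exp(-k*t)
print(stoch, round(det, 1))          # stochastic count fluctuates around the ODE value
```

For large molecule counts the two agree closely; at low counts the fluctuations matter, which is the regime where switching criteria like those the paper discusses become important.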
Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals
, 2009
Abstract

Cited by 158 (18 self)
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system’s performance that supports the empirical observations.
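The gap between the two sampling rates can be made concrete with a toy calculation. The constant C below is a hypothetical illustrative value, since the abstract only gives the O(K log(W/K)) scaling, not the constant.

```python
import math

def demodulator_rate(K, W, C=1.7):
    """Sampling rate suggested by the O(K log(W/K)) bound.
    C is an illustrative constant, not a value from the paper."""
    return C * K * math.log(W / K)

W = 1_000_000_000   # hypothetical 1 GHz bandlimit (Nyquist rate W Hz in this model)
K = 1_000           # hypothetical: only 1000 significant frequencies
rate = demodulator_rate(K, W)
print(f"{rate:,.0f} Hz vs Nyquist {W:,} Hz")
```

Under these assumed numbers the random demodulator needs tens of kilohertz where Nyquist sampling needs a gigahertz, which is the point of the paper's title.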
On credibility of simulation studies of telecommunication networks
 IEEE Communications Magazine
, 2002
Abstract

Cited by 154 (4 self)
In telecommunication networks, as in many other areas of science and engineering, the proliferation of computers as research tools has resulted in the adoption of computer simulation as the most commonly used paradigm of scientific investigation. This, together with a plethora of existing simulation languages and packages, has created a popular opinion that simulation is mainly an exercise in computer programming. In new computing environments, programming can be minimised, or even fully replaced, by the manipulation of icons (representing pre-built programming objects containing basic functional blocks of simulated systems) on a computer monitor. One can say that we have witnessed another success of modern science and technology: the emergence of wonderful and powerful tools for exploring and predicting the behaviour of such complex, stochastic dynamic systems as telecommunication networks. But this enthusiasm is not shared by all researchers in this area. An opinion is spreading that one cannot rely on the majority of the published results on performance evaluation studies of telecommunication networks based on stochastic simulation, since they lack credibility. Indeed, the spread of this phenomenon is so wide that one can speak of a deep crisis of credibility. In this paper, this claim is supported by the results of a survey of over 2200 publications on telecommunication networks.
Exact Simulation of Stochastic Volatility and Other Affine Jump Diffusion Processes
 Working Paper
, 2004
Abstract

Cited by 121 (1 self)
The stochastic differential equations for affine jump diffusion models do not yield exact solutions that can be directly simulated. Discretization methods can be used for simulating security prices under these models. However, discretization introduces bias into the simulation results, and a large number of time steps may be needed to reduce the discretization bias to an acceptable level. This paper suggests a method for the exact simulation of the stock price and variance under Heston’s stochastic volatility model and other affine jump diffusion processes. The sample stock price and variance from the exact distribution can then be used to generate an unbiased estimator of the price of a derivative security. We compare our method with the more conventional Euler discretization method and demonstrate the faster convergence rate of the error in our method. Specifically, our method achieves an O(s^(-1/2)) convergence rate, where s is the total computational budget. The convergence rate for the Euler discretization method is O(s^(-1/3)) or slower, depending on the model coefficients and option payoff function. Subject Classifications: Simulation, efficiency: exact methods. Finance, asset pricing: computational methods. Acknowledgement: This paper was presented at seminars at Columbia University, the sixth Monte Carlo
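For context, the biased Euler scheme this paper compares against looks roughly like the sketch below (a full-truncation variant for the variance process). All parameter values are illustrative, and this is the baseline method, not the paper's exact simulation scheme.

```python
import math
import random

def heston_euler(s0=100.0, v0=0.04, kappa=2.0, theta=0.04, sigma=0.3,
                 rho=-0.7, r=0.05, T=1.0, steps=252, seed=7):
    """One path of the Heston model under Euler discretization with full
    truncation of the variance. Biased: the bias shrinks only as the
    number of time steps grows, which is the paper's point of comparison."""
    rng = random.Random(seed)
    dt = T / steps
    s, v = s0, v0
    for _ in range(steps):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        vp = max(v, 0.0)                     # truncation keeps variance >= 0
        s *= math.exp((r - 0.5 * vp) * dt + math.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + sigma * math.sqrt(vp * dt) * z2
    return s

price = heston_euler()
print(round(price, 2))  # one simulated terminal stock price
```

Exact simulation removes this discretization bias entirely, which is why it attains the O(s^(-1/2)) Monte Carlo rate rather than the slower rate of the Euler scheme.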
A Comparison of Statistical Significance Tests for Information Retrieval Evaluation
, 2007
Abstract

Cited by 114 (10 self)
Information retrieval (IR) researchers commonly use three tests of statistical significance: the Student’s paired t-test, the Wilcoxon signed rank test, and the sign test. Other researchers have previously proposed using both the bootstrap and Fisher’s randomization (permutation) test as non-parametric significance tests for IR, but these tests have seen little use. For each of these five tests, we took the ad hoc retrieval runs submitted to TRECs 3 and 5-8, and for each pair of runs, we measured the statistical significance of the difference in their mean average precision. We discovered that there is little practical difference between the randomization, bootstrap, and t-tests. Both the Wilcoxon and sign test have a poor ability to detect significance and have the potential to lead to false detections of significance. The Wilcoxon and sign tests are simplified variants of the randomization test and their use should be discontinued for measuring the significance of a difference between means.
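A minimal sketch of the paired randomization test discussed here, assuming hypothetical per-topic average-precision scores for two runs (the TREC data itself is not reproduced):

```python
import random

def randomization_test(a, b, trials=10000, seed=0):
    """Fisher's randomization (permutation) test for paired scores: under
    the null, each pair's labels are exchangeable, so we flip the sign of
    each per-topic difference at random and count how often the permuted
    mean difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    diffs = [x - y for x, y in zip(a, b)]
    observed = abs(sum(diffs)) / len(diffs)
    extreme = 0
    for _ in range(trials):
        permuted = sum(d if rng.random() < 0.5 else -d for d in diffs)
        if abs(permuted) / len(diffs) >= observed:
            extreme += 1
    return extreme / trials

# Hypothetical average-precision scores for two runs over ten topics.
run_a = [0.42, 0.55, 0.31, 0.60, 0.48, 0.52, 0.39, 0.66, 0.45, 0.58]
run_b = [0.40, 0.50, 0.30, 0.55, 0.47, 0.49, 0.38, 0.60, 0.44, 0.55]
p = randomization_test(run_a, run_b)
print(p)  # small p-value: run A beats run B on every topic here
```

With real TREC runs the number of topics is larger (typically 50), but the procedure is identical.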
A genetic algorithm for the weight setting problem in OSPF routing
 Journal of Combinatorial Optimization
, 2002
Abstract

Cited by 108 (27 self)
With the growth of the Internet, Internet Service Providers (ISPs) try to meet the increasing traffic demand with new technology and improved utilization of existing resources. Routing of data packets can affect network utilization. Packets are sent along network paths from source to destination following a protocol. Open Shortest Path First (OSPF) is the most commonly used intra-domain Internet routing protocol (IRP). Traffic flow is routed along shortest paths, splitting flow at nodes with several outgoing links on a shortest path to the destination IP address. Link weights are assigned by the network operator. A path length is the sum of the weights of the links in the path. The OSPF weight setting (OSPFWS) problem seeks a set of weights that optimizes network performance. We study the problem of optimizing OSPF weights, given a set of projected demands, with the objective of minimizing network congestion. The weight assignment problem is NP-hard. We present a genetic algorithm (GA) to solve the OSPFWS problem. We compare our results with the best known and commonly used heuristics for OSPF weight setting, as well as with a lower bound of the optimal multicommodity flow routing, which is a linear programming relaxation of the OSPFWS problem. Computational experiments are made on the AT&T Worldnet backbone with projected demands, and on twelve instances of synthetic networks.
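The GA loop for a weight-setting problem can be sketched as below. Note that the fitness function here is a hypothetical stand-in (squared distance to a fixed target weight vector): in the paper, fitness comes from routing the projected demands over OSPF shortest paths and measuring congestion, which is far more involved.

```python
import random

def ga_weights(n_links=10, pop_size=30, gens=60, lo=1, hi=20, seed=3):
    """Skeleton of a genetic algorithm over integer link-weight vectors:
    elitist selection, uniform crossover, single-gene mutation. The cost
    function is an illustrative placeholder, not a congestion model."""
    rng = random.Random(seed)
    target = [rng.randint(lo, hi) for _ in range(n_links)]  # stand-in optimum

    def cost(w):
        return sum((wi - ti) ** 2 for wi, ti in zip(w, target))

    pop = [[rng.randint(lo, hi) for _ in range(n_links)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 3]                  # keep the best third
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            child = [rng.choice(pair) for pair in zip(p1, p2)]  # uniform crossover
            child[rng.randrange(n_links)] = rng.randint(lo, hi)  # mutate one gene
            children.append(child)
        pop = elite + children
    best = min(pop, key=cost)
    return best, cost(best)

best, best_cost = ga_weights()
print(best_cost)  # converges toward 0 on this toy fitness
```

Swapping the placeholder `cost` for a routine that computes shortest paths under the candidate weights and sums a per-link congestion penalty recovers the overall structure of the paper's algorithm.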