Results 1–10 of 12
Computer Experiments
, 1996
Abstract

Cited by 67 (5 self)
Introduction Deterministic computer simulations of physical phenomena are becoming widely used in science and engineering. Computers are used to describe the flow of air over an airplane wing, combustion of gases in a flame, behavior of a metal structure under stress, safety of a nuclear reactor, and so on. Some of the most widely used computer models, and the ones that lead us to work in this area, arise in the design of the semiconductors used in the computers themselves. A process simulator starts with a data structure representing an unprocessed piece of silicon and simulates steps such as oxidation, etching, and ion injection that produce a semiconductor device such as a transistor. A device simulator takes a description of such a device and simulates the flow of current through it under varying conditions to determine properties of the device such as its switching speed and the critical voltage at which it switches. A circuit simulator takes a list of devices and the
Processor Allocation and Checkpoint Interval Selection in Cluster Computing Systems
 Journal of Parallel and Distributed Computing
, 2001
Abstract

Cited by 33 (0 self)
Performance prediction of checkpointing systems in the presence of failures is a well-studied research area. While the literature abounds with performance models of checkpointing systems, none address the issue of selecting runtime parameters other than the optimal checkpointing interval. In particular, the issue of processor allocation is typically ignored. In this paper, we present a performance model for long-running parallel computations that execute with checkpointing enabled. We then discuss how it is relevant to today's parallel computing environments and software, and present case studies of using the model to select runtime parameters. Keywords: checkpointing, performance prediction, parameter selection, parallel computation, Markov chain, exponential failure and repair distributions.
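The interval-selection problem this abstract refers to has a well-known first-order answer for exponential failures (Young's rule of thumb); the sketch below is illustrative, not taken from the paper itself. It also hints at why processor allocation matters: with independent node failures, the system-wide MTBF shrinks as 1/n, so larger allocations force more frequent checkpoints. All numbers are hypothetical.

```python
import math

def young_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    """Young's first-order rule of thumb: T_opt ~ sqrt(2 * C * MTBF),
    valid when the checkpoint cost C is much smaller than the MTBF."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

def system_mtbf(node_mtbf_s: float, n_procs: int) -> float:
    """With independent exponential node failures, the time to the first
    failure anywhere in an n-processor system has mean MTBF_node / n."""
    return node_mtbf_s / n_procs

# Hypothetical numbers: 60 s checkpoints, one node failure per day on average.
# Growing the allocation from 1 to 64 processors shrinks the system MTBF
# 64-fold, so the recommended checkpoint interval shrinks sqrt(64) = 8-fold.
for n in (1, 16, 64):
    print(n, round(young_interval(60.0, system_mtbf(24 * 3600.0, n))))
```

This is exactly the coupling the abstract points at: the checkpoint interval cannot be chosen independently of the processor allocation.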
Small-World Overlay P2P Networks: Construction and Handling Dynamic Flash Crowd
Abstract

Cited by 13 (0 self)
In this paper, we consider how to "construct" and "maintain" an overlay structured P2P network based on the "small-world paradigm". Two main attractive properties of a small-world network are (1) low average hop distance between any two randomly chosen nodes and (2) high clustering coefficient. A network with a low average hop distance implies a small latency for object lookup, while a network with a high clustering coefficient implies the underlying P2P network has the "potential" to provide object lookup service even in the midst of heavy object traffic loading, for example, under a flash crowd scenario. In this paper, we present the SWOP protocol for constructing a small-world overlay P2P network. We compare our result with other structured P2P networks such as the Chord protocol. Although the Chord protocol can provide object lookup with latency of O(log N) complexity, where N is the number of nodes in a P2P network, we show that the SWOP protocol can further improve the object lookup performance. We also take advantage of the high clustering coefficient of a small-world P2P network and propose an object replication algorithm to handle the heavy object traffic loading situation, e.g., under the dynamic flash crowd scenario. We show that the SWOP network can quickly and efficiently deliver the "popular" and "dynamic" object to all requesting nodes. To the best of our knowledge, this is the first work that addresses how to handle the "dynamic" flash crowd scenario on a structured P2P network.
Small World Overlay P2P Networks
 PROC. TWELFTH IEEE INTERNATIONAL WORKSHOP ON QUALITY OF SERVICE
, 2004
Abstract

Cited by 13 (0 self)
We consider the problem of how to construct and maintain an overlay structured P2P network based on the small world paradigm. Two main attractive properties of a small world network are (1) low average hop distance between any two randomly chosen nodes, and (2) high clustering coefficient of nodes. Having a low average hop distance implies a low latency for object lookup, while having a high clustering coefficient implies the underlying network can effectively provide object lookup even under heavy demands (for example, in a flash crowd scenario). In this paper, we present a small world overlay protocol (SWOP) for constructing a small world overlay P2P network. We compare the performance of our system with that of other structured P2P networks such as Chord. We show that the SWOP protocol can achieve improved object lookup performance over the existing protocols. We also exploit the high clustering coefficient of a SWOP network to design an object replication algorithm that can effectively handle heavy object lookup traffic. As a result, a SWOP network can quickly and efficiently deliver popular and dynamic objects to a large number of requesting nodes. To the best of our knowledge, ours is the first piece of work that addresses how to handle dynamic flash crowds in a structured P2P network environment.
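The two metrics this abstract centers on, average hop distance and clustering coefficient, can be computed directly on a toy Watts-Strogatz-style graph. This is an illustrative sketch, not the SWOP construction: it only shows how a few random long-range links cut the hop count while the ring's clustering largely survives.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each joined to its k nearest neighbours per side."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k + 1):
            adj[v].add((v + d) % n)
            adj[(v + d) % n].add(v)
    return adj

def rewire(adj, p, rng):
    """Watts-Strogatz-style rewiring: move each edge to a random endpoint
    with probability p, creating a few long-range 'shortcut' links."""
    n = len(adj)
    for u in range(n):
        for v in sorted(adj[u]):
            if v > u and rng.random() < p:
                w = rng.randrange(n)
                if w != u and w not in adj[u]:
                    adj[u].discard(v)
                    adj[v].discard(u)
                    adj[u].add(w)
                    adj[w].add(u)

def clustering(adj):
    """Average local clustering coefficient over all nodes."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_hops(adj):
    """Average shortest-path hop count over reachable node pairs (BFS)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

g = ring_lattice(100, 3)
print("lattice :", clustering(g), avg_hops(g))
rewire(g, 0.1, random.Random(1))
# With a few shortcuts, the average hop count typically drops sharply
# while the clustering coefficient stays relatively high.
print("rewired :", clustering(g), avg_hops(g))
```

Low hop count is what gives fast lookup; the surviving clustering is what SWOP exploits for replication under flash crowds.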
The average availability of parallel checkpointing systems and its importance in selecting runtime parameters
 IN 29TH INTERNATIONAL SYMPOSIUM ON FAULT-TOLERANT COMPUTING
, 1999
Abstract

Cited by 8 (1 self)
Performance prediction of checkpointing systems in the presence of failures is a well-studied research area. While the literature abounds with performance models of checkpointing systems, none address the issue of selecting runtime parameters other than the optimal checkpointing interval. In particular, the issue of processor allocation is typically ignored. In this paper, we briefly present a performance model for long-running parallel computations that execute with checkpointing enabled. We then discuss how it is relevant to today's parallel computing environments and software, and present case studies of using the model to select runtime parameters.
The Average Availability of Uniprocessor Checkpointing Systems, Revisited
, 1998
Abstract

Cited by 6 (1 self)
Performance prediction of checkpointing systems in the presence of failures is a well-studied research area. This paper makes three small contributions to this research area. First, we show how to apply the concept of availability from reliability theory as a useful metric for checkpointing systems. Second, we study the average availability of uniprocessor checkpointing systems, using the libckpt checkpointer as a model. This is a slight deviation from previous checkpointing models. We employ Bernoulli trials to derive an expression for the availability of such a checkpointing system, and then use this expression to calculate the checkpoint interval which maximizes availability. Third, we present another derivation of the availability based on a direct calculation of average segment uptime. For the exponential failure distribution function, these two derivations are equivalent. The latter derivation allows for a simple way to numerically approximate availability for other failure distr...
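The program this abstract describes, writing availability as a function of the checkpoint interval and then maximizing it, can be sketched numerically. The expression below is the standard renewal-model formula for exponential failures (checkpoint cost C, restart cost R, mean time between failures M), not the paper's own Bernoulli-trial derivation.

```python
import math

def availability(interval, ckpt_cost, restart_cost, mtbf):
    """Fraction of wall-clock time spent on useful work under exponential
    failures. The expected wall clock per unit of useful work is
        (M / I) * exp(R / M) * (exp((I + C) / M) - 1),
    and availability is its reciprocal. (Textbook renewal-model formula,
    not necessarily the paper's exact expression.)"""
    per_unit = (mtbf / interval) * math.exp(restart_cost / mtbf) \
        * (math.exp((interval + ckpt_cost) / mtbf) - 1.0)
    return 1.0 / per_unit

def best_interval(ckpt_cost, restart_cost, mtbf):
    """Coarse grid search for the availability-maximizing interval."""
    candidates = [10.0 * i for i in range(1, 10000)]  # 10 s .. ~28 h
    return max(candidates,
               key=lambda i: availability(i, ckpt_cost, restart_cost, mtbf))
```

For hypothetical costs of a 60 s checkpoint, a 120 s restart, and a one-day MTBF, the maximizer lands near Young's rule of thumb sqrt(2CM), a little over 3000 s.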
The Average Availability of Multiprocessor Checkpointing Systems
, 1998
Abstract

Cited by 1 (1 self)
Performance prediction of checkpointing systems in the presence of failures is a well-studied research area. The average availability is defined as a useful metric for uniprocessor checkpointing systems in a previous Technical Report [PT98]. This report introduces a discrete-parameter, finite-state Markov chain M to compute the availability for multiprocessor checkpointing systems. N is the number of processors in the system. Processors are interchangeable. At any time, each individual processor is either nonfunctional (failed and under repair) or functional (actively working on the task or standing by as a spare). A specified minimum number a of the N processors must be functional in order for the system to work on a distributed task. The system does not use more than a processors but cannot compute with fewer than a. M is based on assumptions of independent exponential probability distributions for identically distributed inter-occurrence times of failures and for identically distribu...
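Under the assumptions this abstract states (N interchangeable processors, independent exponential failures and repairs, a required minimum of a functional processors), the steady-state availability has a simple closed form: each processor is up with probability MTBF / (MTBF + MTTR), so the functional count is Binomial(N, p). The sketch below uses that shortcut rather than building the report's Markov chain M; the numbers are hypothetical.

```python
from math import comb

def system_availability(n, a, mtbf, mttr):
    """Steady-state probability that at least `a` of `n` interchangeable
    processors are functional. Assumes independent exponential failure
    (mean mtbf) and repair (mean mttr) times per processor, so each one
    is up with probability p = mtbf / (mtbf + mttr) and the number of
    functional processors is Binomial(n, p)."""
    p = mtbf / (mtbf + mttr)
    return sum(comb(n, k) * p**k * (1.0 - p)**(n - k)
               for k in range(a, n + 1))

# Hypothetical figures: 720 h between failures, 8 h repairs.
# Requiring all 16 of 16 processors is noticeably less available
# than requiring any 12 of the 16.
print(system_availability(16, 16, 720.0, 8.0))
print(system_availability(16, 12, 720.0, 8.0))
```

The Markov-chain formulation in the report is needed once failures and repairs interact with the checkpoint/rollback cycle; the binomial form above only captures the raw processor-count availability.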
Forecasting the Spatial Dynamics of Gypsy Moth Outbreaks Using Cellular Transition Models
, 1995
Abstract
A series of cellular transition probability models that predict the spatial dynamics of gypsy moth (Lymantria dispar L.) defoliation were developed. The models consisted of four classes: simple Markov chains, Rook's-move neighborhood models, Queen's-move neighborhood models, and distance-weighted neighborhood models. Historical maps of gypsy moth defoliation across Massachusetts from 1961 to 1991 were digitized into a binary raster matrix and used to estimate transition probabilities. Results indicated that the distance-weighted neighborhood model performed better than the other neighborhood models and the simple Markov chain. Incorporating interpolated counts of overwintering egg masses taken throughout the state, and incorporating historical defoliation frequencies, increased the performance of the transition models.
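A minimal version of such a neighborhood model can be sketched as a cellular update on a binary raster, where the probability that a cell becomes defoliated depends on how many of its eight Queen's-move neighbours were defoliated the previous year. The transition table `p_table` below is purely illustrative; the paper estimates these probabilities from the historical maps.

```python
import random

def queen_neighbors(grid, r, c):
    """Count defoliated cells among the 8 Queen's-move neighbours,
    clipping at the raster boundary."""
    rows, cols = len(grid), len(grid[0])
    return sum(grid[rr][cc]
               for rr in range(max(r - 1, 0), min(r + 2, rows))
               for cc in range(max(c - 1, 0), min(c + 2, cols))
               if (rr, cc) != (r, c))

def step(grid, p_table, rng):
    """One annual transition: p_table[k] is the probability that a cell
    is defoliated next year given k defoliated neighbours this year."""
    return [[1 if rng.random() < p_table[queen_neighbors(grid, r, c)] else 0
             for c in range(len(grid[0]))]
            for r in range(len(grid))]

# Illustrative table: defoliation risk rises with neighbour count (0..8).
p_table = [0.02, 0.1, 0.2, 0.35, 0.5, 0.65, 0.8, 0.9, 0.95]
grid = [[0] * 10 for _ in range(10)]
grid[5][5] = 1  # seed a single outbreak cell
rng = random.Random(42)
for year in range(5):
    grid = step(grid, p_table, rng)
```

A Rook's-move variant would simply restrict the neighbour count to the four orthogonal cells, and a distance-weighted variant would replace the count with a weighted sum.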
Bidirectional classical stochastic processes with measurements and feedback
, 2006
Abstract
A measurement on a quantum system is said to cause the “collapse” of the quantum state vector or density matrix. An analogous collapse occurs with measurements on a classical stochastic process. This paper addresses the question of describing the response of a classical stochastic process when there is feedback from the output of a measurement to the input, and is intended to give a simplified model for quantum-mechanical processes that occur along a space-like reaction coordinate. The classical system can be thought of in physical terms as two counter-flowing probability streams, which stochastically exchange probability currents in such a way that the net probability current, and hence the overall probability, suitably interpreted, is conserved. The proposed formalism extends the mathematics of those stochastic processes describable with linear, single-step, unidirectional transition probabilities, known as Markov chains and stochastic matrices. It is shown that a certain rearrangement and combination of the input and output of two stochastic matrices of the same order yields another matrix of the same type. Each measurement causes the partial collapse of the probability current distribution in the midst of such a process, giving rise to calculable, but non-Markov, values for the ensuing modification of the system’s output probability distribution. The paper concludes with an analysis of a simple classical probabilistic version of a so-called grandfather paradox.
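The closure property this abstract invokes, that combining two stochastic matrices yields another matrix of the same type, is easy to verify for the ordinary unidirectional, single-step case the paper generalizes from. The sketch below checks it for plain matrix composition; the paper's bidirectional rearrangement is a more elaborate construction of the same flavor.

```python
def is_row_stochastic(m, tol=1e-12):
    """Every row is a probability distribution: nonnegative, sums to 1."""
    return all(abs(sum(row) - 1.0) < tol and all(x >= -tol for x in row)
               for row in m)

def compose(a, b):
    """Two-step transition matrix (apply a, then b): the matrix product."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Illustrative single-step transition matrices of the same order.
A = [[0.9, 0.1], [0.4, 0.6]]
B = [[0.5, 0.5], [0.2, 0.8]]
AB = compose(A, B)
# Composition of unidirectional single-step transitions stays stochastic:
# this is the Markov-chain baseline the bidirectional formalism extends.
print(is_row_stochastic(AB))
```

Once measurements with feedback enter, the process is no longer described by a single such product, which is exactly the non-Markov behavior the paper sets out to calculate.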