Results 1–10 of 20
Performance and Reliability Analysis Using Directed Acyclic Graphs
IEEE Trans. Software Eng., 1987
Cited by 39 (5 self)
Abstract—A graph-based modeling technique has been developed for the stochastic analysis of systems containing concurrency. The basis of the technique is the use of directed acyclic graphs. These graphs represent event-precedence networks where activities may occur serially, probabilistically, or concurrently. When a set of activities occurs concurrently, the condition for the set of activities to complete is that a specified number of the activities must complete. This includes the special cases that one or all of the activities must complete. The cumulative distribution function associated with an activity is assumed to have exponential polynomial form. Further generality is obtained by allowing these distributions to have a mass at the origin and/or at infinity. The distribution function for the time taken to complete the entire graph is computed symbolically in the time parameter t. The technique allows two or more graphs to be combined hierarchically. Applications of the technique to the evaluation of concurrent program execution time and to the reliability analysis of fault-tolerant systems are discussed. Index Terms—Availability, directed acyclic graphs, fault-tolerance, Markov models, performance evaluation, program performance, reliability.
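The k-of-n completion condition described in this abstract can be illustrated numerically. The sketch below is a hypothetical Monte Carlo check, not the paper's symbolic technique: for a concurrent set of n activities with i.i.d. Exponential(lam) durations, completion at k = n is the maximum of the durations, whose CDF has the closed form (1 - exp(-lam*t))^n.

```python
import math
import random

def k_of_n_completion(durations, k):
    """Completion time of a k-of-n concurrent activity set:
    the k-th smallest duration (k = n means all must finish)."""
    return sorted(durations)[k - 1]

random.seed(0)
lam, n, t = 1.0, 3, 2.0
trials = 200_000

# Monte Carlo estimate of P(completion <= t) when all n activities
# (drawn i.i.d. Exponential(lam)) must finish.
hits = sum(
    k_of_n_completion([random.expovariate(lam) for _ in range(n)], n) <= t
    for _ in range(trials)
)
estimate = hits / trials

# Closed form for the max of n i.i.d. Exponential(lam) variables.
exact = (1.0 - math.exp(-lam * t)) ** n
```

For k < n the closed form becomes a binomial sum over order statistics, while the Monte Carlo estimate applies unchanged.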
Modeling impacts of process architecture on cost and schedule risk in product development
IEEE Transactions on Engineering Management, 2003
Cited by 16 (2 self)
Abstract—To gain competitive leverage, firms that design and develop complex products seek to increase the efficiency and predictability of their development processes. Process improvement is facilitated by the development and use of models that account for and illuminate important characteristics of the process. Iteration is a fundamental but often unaddressed feature of product development (PD) processes. Its impact is mediated by the architecture of a process, i.e., its constituent activities and their interactions. This paper integrates several important characteristics of PD processes into a single model, highlighting the effects of varying process architecture. The PD process is modeled as a network of activities that exchange deliverables. Each activity has an uncertain duration and cost, an improvement curve, and risks of rework based on changes in its inputs. A work policy governs the timing of activity execution and deliverable exchange (and thus the amount of activity concurrency). The model is analyzed via simulation, which outputs sample cost and schedule outcome distributions. Varying the process architecture input varies the output distributions. Each distribution is used with a target and an impact function to determine a risk factor. Alternative process architectures are compared, revealing opportunities to trade cost and schedule risk. Example results and applications are shown for an industrial process, the preliminary design of an uninhabited combat aerial vehicle. The model yields and reinforces several managerial insights, including: how rework cascades through a PD process, trading off cost and schedule risk, interface criticality, and occasions for iterative overlapping. Index Terms—Activity network, budgeting, cycle time, design iteration, design structure matrix, engineering design management, process architecture, process modeling, process structure, product development, rework, risk management.
A Computational Study on Bounding the Makespan Distribution in Stochastic Project Networks
Annals of Operations Research, 1998
Cited by 14 (1 self)
Given a stochastic project network with independently distributed activity durations, several approaches to bound the distribution function of the project completion time have been proposed. We have implemented the most promising of these algorithms and compare their behavior on the basis of nearly 2000 instances, with up to 1200 activities, from different testbeds. We propose a suitable numerical representation of the given distributions, which is the basis for excellent computational results.
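Two classical path-based bounds of the kind such studies compare can be sketched briefly (a generic illustration under assumed exponential durations, not the specific algorithms benchmarked here): the completion time is the maximum of the path lengths, so any single path's CDF upper-bounds the makespan CDF, while the product of the path CDFs lower-bounds it, because path lengths sharing activities are positively associated.

```python
import random

random.seed(1)
# Hypothetical activity network: both paths share activity "a",
# so the two path lengths are positively dependent.
paths = [["a", "b"], ["a", "c"]]
means = {"a": 1.0, "b": 0.5, "c": 0.8}
t, trials = 2.5, 100_000

makespan_hits = 0
path_hits = [0, 0]
for _ in range(trials):
    sample = {act: random.expovariate(1.0 / means[act]) for act in means}
    lengths = [sum(sample[a] for a in p) for p in paths]
    if max(lengths) <= t:
        makespan_hits += 1
    for i, ln in enumerate(lengths):
        if ln <= t:
            path_hits[i] += 1

cdf = makespan_hits / trials                # P(completion time <= t)
path_cdfs = [h / trials for h in path_hits]
lower = path_cdfs[0] * path_cdfs[1]         # product of path CDFs: lower bound
upper = min(path_cdfs)                      # any single path CDF: upper bound
```

The gap between `lower` and `upper` is exactly what more refined bounding algorithms try to close.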
A heuristic for optimizing stochastic activity networks with applications to statistical digital circuit sizing
IEEE Transactions on Circuits and Systems I, 2004
Cited by 12 (4 self)
A deterministic activity network (DAN) is a collection of activities, each with some duration, along with a set of precedence constraints, which specify that activities begin only when certain others have finished. One critical performance measure for an activity network is its makespan, which is the minimum time required to complete all activities. In a stochastic activity network (SAN), the durations of the activities and the makespan are random variables. The analysis of SANs is quite involved, but can be carried out numerically by Monte Carlo analysis. This paper concerns the optimization of a SAN, i.e., the choice of some design variables that affect the probability distributions of the activity durations. We concentrate on the problem of minimizing a quantile (e.g., 95%) of the makespan, subject to constraints on the variables. This problem has many applications, ranging from project management to digital integrated circuit (IC) sizing (the latter being our motivation). While there are effective methods for optimizing DANs, the SAN optimization problem is much more difficult; the few existing methods cannot handle large-scale problems.
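The Monte Carlo analysis mentioned above can be sketched for a hypothetical four-activity SAN; only the quantile estimation that the optimization builds on is shown, not the sizing heuristic itself.

```python
import random

def makespan(durations, preds, order):
    """Longest-path completion time: each activity starts when
    all of its predecessors have finished."""
    finish = {}
    for node in order:  # 'order' must be a topological order
        start = max((finish[p] for p in preds[node]), default=0.0)
        finish[node] = start + durations[node]
    return max(finish.values())

# Hypothetical 4-activity network: a -> b, a -> c, {b, c} -> d.
preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
order = ["a", "b", "c", "d"]
means = {"a": 1.0, "b": 2.0, "c": 3.0, "d": 1.0}

random.seed(2)
samples = sorted(
    makespan({n: random.expovariate(1.0 / means[n]) for n in means}, preds, order)
    for _ in range(50_000)
)
q95 = samples[int(0.95 * len(samples))]  # empirical 95% quantile
```

An optimizer would treat the mean durations as functions of the design variables and search for values that shrink `q95` subject to the constraints.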
A Survey on Solution Methods for Task Graph Models
1993
Cited by 9 (4 self)
We give in this paper a survey of models developed in the literature using the concept of task graphs, focusing on solution techniques. Different types of task graphs are considered, from PERT networks to random task graphs. Reviewed solution methods include exact computations and bounds. 1 Introduction, Concepts and Notations The purpose of this paper is to survey models based on stochastic task graph representations and the solution techniques that have been developed for them. The reason for doing this in the framework of the QMIPS project is that task graphs appear to be of central importance in the modeling and analysis of parallel programs and architectures. Yet, the solution of task graph problems is difficult in general. No really satisfactory and sufficiently general solution has been proposed to date, and research is still active in the area. The term "task graphs" now covers a wide variety of models. We shall begin the survey with what appears to be the initi...
Stochastic Graph Models for Performance Evaluation of Parallel Programs and the Evaluation Tool PEPP
University of Erlangen-Nürnberg, IMMD 7, Internal Report 3/93, 1993
Cited by 7 (0 self)
For parallelizing an algorithm and for mapping a given program onto a parallel or distributed system there are generally many possibilities. Performance models can help to predict which implementation and which mapping are best for a given algorithm and a given computer configuration. Stochastic graph modeling is an appropriate method, since the execution order of tasks, their runtime distributions, and branching probabilities are all represented. In this paper a survey of the modeling possibilities and the analysis techniques implemented in our tool PEPP is presented. The analysis techniques include series-parallel reduction applied to the numerical representation of the tasks' runtimes, a new approximation method, and powerful bounding methods for the mean runtime.
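A minimal sketch of series-parallel reduction on a numerical (discretized) runtime representation, assuming a unit time grid and hypothetical task PMFs: a series composition convolves the PMFs (runtimes add), while a parallel AND-join multiplies the CDFs pointwise (both tasks must finish, so the runtime is the maximum).

```python
def series(p, q):
    """Series reduction: the runtime of A followed by B is the sum,
    so the discretized PMFs are convolved."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def parallel(p, q):
    """Parallel (AND-join) reduction: the runtime is the max of the
    two runtimes, so the CDFs multiply pointwise."""
    n = max(len(p), len(q))
    cdf = lambda pmf, k: sum(pmf[: k + 1])
    out_cdf = [cdf(p, k) * cdf(q, k) for k in range(n)]
    return [out_cdf[0]] + [out_cdf[k] - out_cdf[k - 1] for k in range(1, n)]

# Hypothetical task runtimes on a unit time grid: index k = k time units.
a = [0.5, 0.5]           # task A takes 0 or 1 units, equally likely
b = [0.0, 1.0]           # task B takes exactly 1 unit
both = parallel(a, b)    # A and B in parallel: always 1 unit
total = series(both, b)  # followed by another copy of B: always 2 units
```

Repeatedly applying these two reductions collapses any series-parallel graph to a single runtime distribution; non-series-parallel structures need the approximation or bounding methods the abstract mentions.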
Task Graph Performance Bounds Through Comparison Methods
2001
Cited by 5 (1 self)
When a parallel computation is represented in a formalism that imposes series-parallel structure on its task graph, it becomes amenable to automated analysis and scheduling. Unfortunately, its execution time will usually also increase as precedence constraints are added to ensure series-parallel structure. Bounding the slowdown ratio would allow an informed tradeoff between the benefits of a restrictive formalism and its cost in lost performance. This dissertation deals with series-parallelising task graphs: adding precedence constraints to a task graph so that the result is series-parallel. The weak bounded slowdown conjecture for series-parallelising task graphs is introduced. This states that the slowdown is bounded if information about the workload can be used to guide the selection of which precedence constraints to add. A theory of best series-parallelisations is developed to investigate this conjecture. Partial evidence is presented that the weak slowdown bound is likely to be 4/3, and this bound is shown to be tight.
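The slowdown in question can be seen on a small hypothetical example: the non-series-parallel "N" graph, with durations chosen so that the added precedence constraint realises exactly the 4/3 ratio mentioned above.

```python
def makespan(durations, preds):
    """Earliest-finish schedule with unlimited processors:
    the longest path through the precedence DAG."""
    finish = {}
    def f(n):
        if n not in finish:
            finish[n] = durations[n] + max((f(p) for p in preds[n]), default=0.0)
        return finish[n]
    return max(f(n) for n in durations)

dur = {"a": 2.0, "b": 1.0, "c": 1.0, "d": 2.0}
# The "N" graph a->c, b->c, b->d is not series-parallel.
n_graph = {"a": [], "b": [], "c": ["a", "b"], "d": ["b"]}
# Adding the constraint a->d yields the series-parallel graph
# (a || b) ; (c || d), at the price of delaying d behind a.
sp_graph = {"a": [], "b": [], "c": ["a", "b"], "d": ["a", "b"]}

slowdown = makespan(dur, sp_graph) / makespan(dur, n_graph)  # 4.0 / 3.0
```

With workload information, a series-parallelisation could instead pick constraints that keep long activities off each other's critical paths, which is the selection problem the conjecture addresses.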
A Comparison of Robustness Metrics for Scheduling DAGs on Heterogeneous Systems
In HeteroPar'07, 2007
Cited by 5 (3 self)
Abstract—A schedule is said to be robust if it is able to absorb some degree of uncertainty in task durations while maintaining a stable solution. This intuitive notion of robustness has led to many different interpretations and metrics. However, no comparison of these different metrics has ever been performed. In this paper, we perform an experimental study of these metrics and show how they are correlated to each other in the case of task scheduling with dependencies between tasks.
Stochastic Modeling of Scaled Parallel Programs
In Proceedings of the International Conference on Parallel and Distributed Systems, 1994
Cited by 4 (1 self)
Testing the performance scalability of parallel programs can be a time-consuming task, involving many performance runs for different computer configurations, processor numbers, and problem sizes. Ideally, scalability issues would be addressed during parallel program design, but tools are not presently available that allow program developers to study the impact of algorithmic choices under different problem and system scenarios. Hence, scalability analysis is often reserved to existing (and available) parallel machines as well as implemented algorithms. In this paper, we propose techniques for analyzing scaled parallel programs using stochastic modeling approaches. Although allowing more generality and flexibility in analysis, stochastic modeling of large parallel programs is difficult due to solution tractability problems. We observe, however, that the complexity of parallel program models depends significantly on the type of parallel computation, and we present several computation clas...
A Study of Approximating the Moments of the Job Completion Time in PERT Networks
1995
Cited by 4 (0 self)
this paper. The project starts at the initial node and ends at the terminal node. A path is a set of nodes connected by arrows which begins at the initial node and ends at the terminal node. This collection of arcs, nodes, and paths is collectively called an activity network. A project is deemed complete if work along all paths is complete. After the development of the network, the next major planning step is the estimation of activity and project times. Typical methods for estimating activity times have been to use point estimates or some sort of range or distribution. The type of method used depends on the situation facing the project manager. Hershauer and Nabielsky (1972) categorize the situations into three major categories, viz., certainty, risk, and uncertainty. They further subdivide these categories based on the availability of knowledge regarding the mode, range, and distribution of the time estimates. They then map the situations and estimates to the appropriate methods to be adopted. If activity times are deterministic, the project completion time is determined by the length of the longest path in the network. However, this becomes complicated when activity times are stochastic in nature. We assume a scenario equivalent to Hershauer and Nabielsky's risk category, namely a common-distribution situation. For a stochastic activity network, Kulkarni and Adlakha (1986) have identified three important measures of performance: (a) Distribution of the project completion time.
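A small sketch of the stochastic complication described above, under assumed exponential activity durations: the classical PERT point estimate, obtained by plugging mean durations into the longest path, underestimates the mean completion time, which Monte Carlo sampling of the moments makes visible.

```python
import random

def completion_time(durations, preds, order):
    """Project completion time: the longest path through the activity
    network (an activity starts when all its predecessors finish)."""
    finish = {}
    for n in order:  # topological order
        finish[n] = durations[n] + max((finish[p] for p in preds[n]), default=0.0)
    return max(finish.values())

# Hypothetical two-path PERT network: {a, b} in parallel, then "end".
preds = {"a": [], "b": [], "end": ["a", "b"]}
order = ["a", "b", "end"]
means = {"a": 3.0, "b": 3.0, "end": 1.0}

random.seed(3)
samples = [
    completion_time({n: random.expovariate(1.0 / means[n]) for n in means}, preds, order)
    for _ in range(100_000)
]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

# The PERT point estimate uses mean durations on the longest path.
pert_estimate = completion_time(means, preds, order)  # = 4.0
```

Here the true mean is 5.5 (the expected maximum of two i.i.d. exponentials with mean 3, plus 1), so the deterministic estimate of 4.0 misses the contribution of the path maximum; approximating the first two moments directly is what the surveyed methods aim for.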