Results 1 – 10 of 433
Model-checking algorithms for continuous-time Markov chains
IEEE Transactions on Software Engineering, 2003
Cited by 235 (48 self)
Abstract:
Continuous-time Markov chains (CTMCs) have been widely used to determine system performance and dependability characteristics. Their analysis most often concerns the computation of steady-state and transient-state probabilities. This paper introduces a branching temporal logic for expressing real-time probabilistic properties on CTMCs and presents approximate model checking algorithms for this logic. The logic, an extension of the continuous stochastic logic CSL of Aziz et al., contains a time-bounded until operator to express probabilistic timing properties over paths as well as an operator to express steady-state probabilities. We show that the model checking problem for this logic reduces to a system of linear equations (for unbounded until and the steady-state operator) and a Volterra integral equation system (for time-bounded until). We then show that the problem of model-checking time-bounded until properties can be reduced to the problem of computing transient state probabilities for CTMCs. This allows the verification of probabilistic timing properties by efficient techniques for transient analysis for CTMCs such as uniformization. Finally, we show that a variant of lumping equivalence (bisimulation), a well-known notion for aggregating CTMCs, preserves the validity of all formulas in the logic.
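Uniformization, the transient-analysis technique the abstract relies on, can be sketched in a few lines (a minimal illustration of the standard recurrence, not the paper's implementation; the function name and the two-state example below are invented):

```python
import numpy as np

def transient_probs(Q, pi0, t, tol=1e-12):
    """Transient distribution pi(t) of a CTMC by uniformization:
    pi(t) = sum_k Poisson(k; q*t) * pi0 @ P^k, where P = I + Q/q and
    the uniformization rate is q >= max_i |Q_ii|.  (For large q*t the
    k = 0 Poisson weight underflows; production codes start the sum
    near the Poisson mode instead.)"""
    q = max(-Q.diagonal().min(), 1e-12)
    P = np.eye(Q.shape[0]) + Q / q
    term = pi0.astype(float)          # pi0 @ P^k, updated in place
    weight = np.exp(-q * t)           # Poisson weight for k = 0
    pi_t = weight * term
    mass, k = weight, 1
    while mass < 1.0 - tol:           # stop once the Poisson mass is spent
        term = term @ P
        weight *= q * t / k
        pi_t += weight * term
        mass += weight
        k += 1
    return pi_t
```

For the symmetric two-state chain with Q = [[-1, 1], [1, -1]] started in state 0, the exact answer is pi(t) = [(1 + e^{-2t})/2, (1 - e^{-2t})/2], which the sketch reproduces.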
Lie-group methods
Acta Numerica, 2000
Cited by 153 (24 self)
Abstract:
Many differential equations of practical interest evolve on Lie groups or on manifolds acted upon by Lie groups. The retention of Lie-group structure under discretization is often vital in the recovery of qualitatively correct geometry and dynamics and in the minimization of numerical error. Having introduced requisite elements of differential geometry, this paper surveys the novel theory of numerical integrators that respect Lie-group structure, highlighting theory, algorithmic issues and a number of applications.
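As a minimal illustration of structure retention (one of the simplest instances of the methods surveyed, not an algorithm taken from the paper), the Lie-Euler method replaces the classical Euler update with a matrix exponential so that every iterate stays exactly on the group:

```python
import numpy as np
from scipy.linalg import expm

def lie_euler(A, Y0, h, steps):
    """Lie-Euler method for Y' = A(t) Y: step Y <- exp(h A(t)) Y.
    If A(t) lies in the Lie algebra (skew-symmetric below, so the
    group is SO(3)), each factor exp(h A) is orthogonal and the
    iterates remain on the group, unlike the classical Euler update
    Y <- Y + h A(t) Y, which drifts off it."""
    Y, t = Y0.copy(), 0.0
    for _ in range(steps):
        Y = expm(h * A(t)) @ Y
        t += h
    return Y
```

Integrating a constant skew-symmetric A from the identity, the result satisfies Y^T Y = I to machine precision at every step.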
Exponential integrators for large systems of differential equations
SIAM J. Sci. Comput., 1998
The scaling and squaring method for the matrix exponential revisited
SIAM Review, 2009
Cited by 100 (20 self)
Abstract:
The calculation of the matrix exponential e^A may be one of the best-known matrix problems in numerical computation. It achieved folk status in our community from the paper by Moler and Van Loan, “Nineteen Dubious Ways to Compute the Exponential of a Matrix,” published in this journal in 1978 (and revisited in this journal in 2003). The matrix exponential is utilized in a wide variety of numerical methods for solving differential equations and many other areas. It is somewhat amazing, given the long history and extensive study of the matrix exponential problem, that one can improve upon the best existing methods in terms of both accuracy and efficiency, but that is what the SIGEST selection in this issue does. “The Scaling and Squaring Method for the Matrix Exponential Revisited” by N. Higham, originally published in the SIAM Journal on Matrix Analysis and Applications in 2005, applies a new backward error analysis to the commonly used scaling and squaring method, as well as a new rounding error analysis of the Padé approximant of the scaled matrix. The analysis shows, and the accompanying experimental results verify, that a Padé approximant of a higher order than currently used actually results in a more accurate algorithm.
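A low-order sketch of scaling and squaring (Higham's actual method selects the Padé degree and the scaling threshold from a backward error analysis; the degree-3 approximant and the 0.5 cutoff here are illustrative choices only):

```python
import numpy as np

def expm_ss(A):
    """Scaling and squaring with a diagonal [3/3] Pade approximant:
    scale so ||A / 2^s||_1 <= 0.5, evaluate r(X) = q(X)^{-1} p(X)
    with p(x) = 120 + 60x + 12x^2 + x^3 and q(x) = p(-x), then undo
    the scaling by repeated squaring."""
    b = [120.0, 60.0, 12.0, 1.0]          # Pade coefficients for m = 3
    norm = np.linalg.norm(A, 1)
    s = int(np.ceil(np.log2(norm / 0.5))) if norm > 0.5 else 0
    X = A / 2**s
    X2 = X @ X
    I = np.eye(A.shape[0])
    U = X @ (b[3] * X2 + b[1] * I)        # odd part of p(X)
    V = b[2] * X2 + b[0] * I              # even part of p(X)
    R = np.linalg.solve(V - U, V + U)     # q(X)^{-1} p(X)
    for _ in range(s):
        R = R @ R                         # squaring phase
    return R
```

On a diagonal matrix the result can be checked against the scalar exponential directly.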
A Combinatorial, Primal-Dual Approach to Semidefinite Programs
Cited by 94 (10 self)
Abstract:
Semidefinite programs (SDPs) have been used in many recent approximation algorithms. We develop a general primal-dual approach to solve SDPs using a generalization of the well-known multiplicative weights update rule to symmetric matrices. For a number of problems, such as Sparsest Cut and Balanced Separator in undirected and directed weighted graphs, and the Min UnCut problem, this yields combinatorial approximation algorithms that are significantly more efficient than interior point methods. The design of our primal-dual algorithms is guided by a robust analysis of rounding algorithms used to obtain integer solutions from fractional ones.
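The core update the abstract refers to, the matrix generalization of multiplicative weights, can be sketched as follows (an illustrative fragment only, not the paper's full primal-dual SDP algorithm; the function name is invented):

```python
import numpy as np
from scipy.linalg import expm

def mmw_density(loss_matrices, eps):
    """Matrix multiplicative weights update: given symmetric loss
    matrices M_1..M_t (each with spectral norm <= 1), play the
    density matrix X = exp(-eps * S) / tr(exp(-eps * S)) where
    S = sum_s M_s.  This generalizes the scalar exponential-weights
    rule; X is positive semidefinite with unit trace, and it shifts
    weight toward the low-loss eigendirections of S."""
    S = np.sum(loss_matrices, axis=0)
    E = expm(-eps * S)
    return E / np.trace(E)
```

For a single diagonal loss matrix diag(1, 0), the update places more mass on the second coordinate (the direction that incurred no loss), mirroring the scalar rule.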
Optimization techniques on Riemannian manifolds
Fields Institute Communications, 1994
Cited by 86 (1 self)
Abstract:
The techniques and analysis presented in this paper provide new methods to solve optimization problems posed on Riemannian manifolds. A new point of view is offered for the solution of constrained optimization problems. Some classical optimization techniques on Euclidean space are generalized to Riemannian manifolds. Several algorithms are presented and their convergence properties are analyzed employing the Riemannian structure of the manifold. Specifically, two apparently new algorithms, which can be thought of as Newton’s method and the conjugate gradient method on Riemannian manifolds, are presented and shown to possess, respectively, quadratic and superlinear convergence. Examples of each method on certain Riemannian manifolds are given with the results of numerical experiments. Rayleigh’s quotient defined on the sphere is one example. It is shown that Newton’s method applied to this function converges cubically, and that the Rayleigh quotient iteration is an efficient approximation of Newton’s method. The Riemannian version of the conjugate gradient method applied to this function gives a new algorithm for finding the eigenvectors corresponding to the extreme eigenvalues of a symmetric matrix. Another example arises from extremizing the function tr(Θ^T Q Θ N) on the special orthogonal group. In a similar example, it is shown that Newton’s method applied to the sum of the squares of the off-diagonal entries of a symmetric matrix converges cubically.
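The sphere example can be sketched as Riemannian gradient ascent on the Rayleigh quotient (the paper's algorithms are Newton and conjugate gradient methods; plain gradient ascent with a normalization retraction is shown here only to illustrate the project/step/retract pattern, and the function name is invented):

```python
import numpy as np

def rayleigh_ascent(A, x0, steps=500, lr=0.1):
    """Riemannian gradient ascent for f(x) = x^T A x on the unit
    sphere: project the Euclidean gradient 2Ax onto the tangent
    space at x, take a step, and retract by renormalizing."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        g = 2.0 * (A @ x - (x @ A @ x) * x)  # tangent-space gradient
        x = x + lr * g                        # step along the tangent
        x = x / np.linalg.norm(x)             # retraction to the sphere
    return x @ A @ x, x
```

Started from a generic point, the iterate converges to an eigenvector of the largest eigenvalue, and the quotient converges to that eigenvalue.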
A Schur–Parlett algorithm for computing matrix functions
SIAM J. Matrix Anal. Appl.
Cited by 75 (23 self)
Abstract:
An algorithm for computing matrix functions is presented. It employs a Schur decomposition with reordering and blocking followed by the block form of a recurrence of Parlett, with functions of the nontrivial diagonal blocks evaluated via a Taylor series. A parameter is used to balance the conflicting requirements of producing small diagonal blocks and keeping the separations of the blocks large. The algorithm is intended primarily for functions having a Taylor series with an infinite radius of convergence, but it can be adapted for certain other functions, such as the logarithm. Novel features introduced here include a convergence test that avoids premature termination of the Taylor series evaluation and an algorithm for reordering and blocking the Schur form. Numerical experiments show that the algorithm is competitive with existing special-purpose algorithms for the matrix exponential, logarithm, and cosine. Nevertheless, the algorithm can be numerically unstable with the default choice of its blocking parameter (or in certain cases for all choices), and we explain why determining the optimal parameter appears to be a very difficult problem. A MATLAB implementation is available that is much more reliable than the function funm in MATLAB 6.5 (R13).
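The scalar (point) form of the Parlett recurrence at the heart of the algorithm can be sketched as follows (the paper's contribution is the block form with Schur reordering, which handles close or repeated eigenvalues; this unblocked sketch requires distinct diagonal entries):

```python
import numpy as np

def parlett(T, f):
    """Point Parlett recurrence: F = f(T) for a real upper triangular
    T with distinct diagonal entries.  From FT = TF one derives
    F_ij = (T_ij (F_ii - F_jj) + sum_{i<k<j} (F_ik T_kj - T_ik F_kj))
           / (T_ii - T_jj),
    solved one superdiagonal at a time."""
    n = T.shape[0]
    F = np.zeros_like(T, dtype=float)
    for i in range(n):
        F[i, i] = f(T[i, i])              # scalar function on the diagonal
    for d in range(1, n):                 # sweep the superdiagonals
        for i in range(n - d):
            j = i + d
            s = T[i, j] * (F[i, i] - F[j, j])
            s += F[i, i+1:j] @ T[i+1:j, j] - T[i, i+1:j] @ F[i+1:j, j]
            F[i, j] = s / (T[i, i] - T[j, j])
    return F
```

For T = [[1, 1], [0, 2]] and f = exp, the recurrence gives the exact off-diagonal entry (e^2 - e)/(2 - 1), i.e. the divided difference of f at the two eigenvalues.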
Efficient Solution Of Parabolic Equations By Krylov Approximation Methods
SIAM J. Sci. Statist. Comput., 1992
Cited by 71 (3 self)
Abstract:
In this paper we take a new look at numerical techniques for solving parabolic equations by the method of lines. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of very small dimension to a known vector, which is, in turn, computed accurately by exploiting high-order rational Chebyshev and Padé approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Further parallelism is introduced by expanding the rational approximations into partial fractions. Some ...
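The projection process can be sketched with an Arnoldi iteration (a minimal serial illustration, without the parallel partial-fraction evaluation the paper develops; the function name and breakdown tolerance are invented, and the small exponential is delegated to scipy rather than a rational approximation):

```python
import numpy as np
from scipy.linalg import expm

def krylov_expv(A, v, t, m=30):
    """Approximate exp(t A) @ v from the Krylov subspace
    span{v, Av, ..., A^{m-1} v}: build an orthonormal basis V and the
    small Hessenberg matrix H = V^T A V by Arnoldi, then
        exp(t A) v  ~  beta * V @ expm(t H) @ e1,   beta = ||v||,
    so only an m-by-m exponential is ever formed; the large A enters
    only through matrix-vector products."""
    n = v.size
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown: subspace is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)
```

When the Krylov subspace exhausts the whole space (as for a small diagonal test matrix), the approximation is exact.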
A Survey of Condition Number Estimation for Triangular Matrices
SIAM Review, 1987
Cited by 65 (7 self)
Abstract:
We survey and compare a wide variety of techniques for estimating the condition number of a triangular matrix, and make recommendations concerning the use of the estimates in applications. Each of the methods is shown to bound the condition number; the bounds can broadly be categorised as upper bounds from matrix theory and lower bounds from heuristic or probabilistic algorithms. For each bound we examine by how much, at worst, it can overestimate or underestimate the condition number. Numerical experiments are presented in order to illustrate and compare the practical performance of the condition estimators.
Key words: matrix condition number, triangular matrix, LINPACK, QR decomposition, rank estimation. AMS(MOS) subject classification: 65F35.
1. Introduction. Let C^{m×n} (R^{m×n}) denote the set of all m × n matrices with complex (real) elements. Given a nonsingular matrix A ∈ C^{n×n} and a matrix norm ‖·‖ on C^{n×n}, the condition number of A with respect to inversion is defined by κ(A) = ‖A‖ ‖A⁻¹‖.
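One representative heuristic lower bound of the kind surveyed, in the spirit of the LINPACK estimator, can be sketched as follows (an illustrative sketch, not an algorithm reproduced from the paper; the function name is invented):

```python
import numpy as np

def cond_inf_lower_bound(T):
    """Heuristic lower bound on the infinity-norm condition number of
    an upper triangular T.  Solve T y = d by back substitution,
    choosing each d_i = +-1 on the fly so as to enlarge |y_i|.  Since
    ||d||_inf = 1, any such solve gives ||y||_inf <= ||T^{-1}||_inf,
    hence ||T||_inf * ||y||_inf <= kappa_inf(T)."""
    n = T.shape[0]
    y = np.zeros(n)
    for i in range(n - 1, -1, -1):
        s = T[i, i+1:] @ y[i+1:]
        d = 1.0 if s <= 0 else -1.0   # sign choice that grows |y_i|
        y[i] = (d - s) / T[i, i]
    return np.linalg.norm(T, np.inf) * np.linalg.norm(y, np.inf)
```

On the 4 × 4 upper triangular matrix of ones the greedy solve happens to attain the exact condition number (κ_∞ = 8); in general the estimate is only a lower bound.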