Results 1–10 of 184
A direct formulation for sparse PCA using semidefinite programming
 In NIPS 17
, 2004
Cited by 166 (29 self)
Abstract. Given a covariance matrix, we consider the problem of maximizing the variance explained by a particular linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This problem arises in the decomposition of a covariance matrix into sparse factors or sparse principal component analysis (PCA), and has wide applications ranging from biology to finance. We use a modification of the classical variational representation of the largest eigenvalue of a symmetric matrix, where cardinality is constrained, and derive a semidefinite programming-based relaxation for our problem. We also discuss Nesterov's smooth minimization technique applied to the semidefinite program arising in the semidefinite relaxation of the sparse PCA problem. The method has complexity O(n^4 √(log n)/ε), where n is the size of the underlying covariance matrix and ε is the desired absolute accuracy on the optimal value of the problem.
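To make the cardinality-constrained variance maximization above concrete, here is a minimal truncated-power heuristic: power iteration with hard thresholding to at most k nonzeros. This is our own illustrative stand-in, not the paper's SDP relaxation; the function name and toy covariance are invented for the example.

```python
import numpy as np

def sparse_pca_truncated_power(cov, k, iters=200, seed=0):
    """Heuristic sparse PCA: power iteration on the covariance matrix,
    hard-thresholding to the k largest-magnitude coefficients each step.
    Illustrates the cardinality constraint card(x) <= k; it is NOT the
    semidefinite relaxation the paper derives."""
    rng = np.random.default_rng(seed)
    n = cov.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = cov @ x
        idx = np.argsort(np.abs(y))[:-k]   # indices of the n-k smallest entries
        y[idx] = 0.0                       # enforce the cardinality constraint
        x = y / np.linalg.norm(y)
    return x                               # unit vector with at most k nonzeros

# toy covariance: variance concentrated on the first two coordinates
Sigma = np.diag([5.0, 4.0, 0.1, 0.1, 0.1])
x = sparse_pca_truncated_power(Sigma, k=2)
print(np.nonzero(x)[0])    # support of the sparse component
print(x @ Sigma @ x)       # variance explained by the sparse component
```

On this diagonal example the iteration locks onto the two high-variance coordinates and then converges to the dominant one within that support.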
Model-checking algorithms for continuous-time Markov chains
 IEEE TRANSACTIONS ON SOFTWARE ENGINEERING
, 2003
Cited by 128 (26 self)
Continuous-time Markov chains (CTMCs) have been widely used to determine system performance and dependability characteristics. Their analysis most often concerns the computation of steady-state and transient-state probabilities. This paper introduces a branching temporal logic for expressing real-time probabilistic properties on CTMCs and presents approximate model-checking algorithms for this logic. The logic, an extension of the continuous stochastic logic CSL of Aziz et al., contains a time-bounded until operator to express probabilistic timing properties over paths as well as an operator to express steady-state probabilities. We show that the model-checking problem for this logic reduces to a system of linear equations (for unbounded until and the steady-state operator) and a Volterra integral equation system (for time-bounded until). We then show that the problem of model-checking time-bounded until properties can be reduced to the problem of computing transient state probabilities for CTMCs. This allows the verification of probabilistic timing properties by efficient techniques for transient analysis of CTMCs such as uniformization. Finally, we show that a variant of lumping equivalence (bisimulation), a well-known notion for aggregating CTMCs, preserves the validity of all formulas in the logic.
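The uniformization technique that the abstract mentions can be sketched in a few lines: the transient distribution is a Poisson-weighted sum of powers of a subordinated DTMC. The function and the two-state availability model below are our own illustration, not the paper's full CSL model checker.

```python
import numpy as np

def transient_probs(Q, pi0, t, tol=1e-12):
    """Transient distribution pi(t) of a CTMC via uniformization:
    pi(t) = sum_k e^{-qt} (qt)^k / k! * pi0 @ P^k, with P = I + Q/q.
    Sketch of the standard technique, not the paper's model checker."""
    q = 1.01 * max(-Q.diagonal())        # uniformization rate, > max exit rate
    P = np.eye(Q.shape[0]) + Q / q       # subordinated DTMC (stochastic matrix)
    term = pi0.astype(float)
    weight = np.exp(-q * t)              # Poisson(qt) probability of k = 0
    result = weight * term
    acc, k = weight, 0
    while 1.0 - acc > tol:               # stop once the Poisson tail < tol
        k += 1
        term = term @ P
        weight *= q * t / k
        result = result + weight * term
        acc += weight
    return result

# two-state availability model: failure rate 1.0, repair rate 2.0
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
pi = transient_probs(Q, np.array([1.0, 0.0]), t=5.0)
print(pi)   # approaches the steady state [2/3, 1/3]
```

Only matrix-vector products with the stochastic matrix P are needed, which is why uniformization scales to large state spaces.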
Lie-group methods
 ACTA NUMERICA
, 2000
Cited by 92 (18 self)
Many differential equations of practical interest evolve on Lie groups or on manifolds acted upon by Lie groups. The retention of Lie-group structure under discretization is often vital in the recovery of qualitatively correct geometry and dynamics and in the minimization of numerical error. Having introduced requisite elements of differential geometry, this paper surveys the novel theory of numerical integrators that respect Lie-group structure, highlighting theory, algorithmic issues and a number of applications.
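The simplest member of the family this survey covers is the Lie-Euler method, which advances Y' = A(t)Y by multiplying with a group element each step. The test problem below (a skew-symmetric coefficient matrix, so the flow stays orthogonal) is our own illustration.

```python
import numpy as np
from scipy.linalg import expm

def lie_euler(A, Y0, t_end, n_steps):
    """Lie-Euler method for Y' = A(t) Y: Y_{n+1} = expm(h A(t_n)) Y_n.
    Each step multiplies by a matrix-group element, so structure such as
    orthogonality is preserved to roundoff - the simplest instance of the
    Lie-group integrators the survey discusses."""
    h = t_end / n_steps
    Y = Y0.copy()
    for n in range(n_steps):
        Y = expm(h * A(n * h)) @ Y
    return Y

def A(t):
    # skew-symmetric coefficient matrix, so the exact flow stays in SO(3)
    w1, w2, w3 = 1.0, 0.5 * np.sin(t), 0.2
    return np.array([[0.0, -w3,  w2],
                     [ w3, 0.0, -w1],
                     [-w2,  w1, 0.0]])

Y = lie_euler(A, np.eye(3), t_end=10.0, n_steps=200)
print(np.linalg.norm(Y.T @ Y - np.eye(3)))   # orthogonality error near roundoff
```

A classical explicit Euler step Y + h*A(t)*Y would drift off the group; the exponential update is what retains the structure.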
Exponential Integrators For Large Systems Of Differential Equations
 SIAM J. Sci. Comput
, 1997
Cited by 87 (1 self)
We study the numerical integration of large stiff systems of differential equations by methods that use matrix-vector products with the exponential or a related function of the Jacobian. For large problems, these can be approximated by Krylov subspace methods, which typically converge faster than those for the solution of the linear systems arising in standard stiff integrators. The exponential methods also offer favorable properties in the integration of differential equations whose Jacobian has large imaginary eigenvalues. We derive methods up to order 4 which are exact for linear constant-coefficient equations. The implementation of the methods is discussed. Numerical experiments with reaction-diffusion problems and a time-dependent Schrödinger equation are included. Key words: numerical integrator, high-dimensional differential equations, matrix exponential, Krylov subspace methods. AMS(MOS) subject classifications: 65L05, 65M15, 65F10.
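The exactness-on-linear-problems property mentioned above already holds for the order-1 exponential Euler scheme, sketched here. This is our own minimal illustration of the exponential-integrator idea (the paper derives methods up to order 4); the dense φ₁ evaluation assumes a small, nonsingular Jacobian, whereas for large problems one would use the Krylov approximations the abstract describes.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi1(Z):
    """phi_1(Z) = Z^{-1} (expm(Z) - I); dense evaluation, assumes Z nonsingular."""
    return solve(Z, expm(Z) - np.eye(Z.shape[0]))

def exponential_euler(J, f, u0, h, n_steps):
    """Exponential Euler: u_{n+1} = u_n + h phi1(h J) f(u_n).
    Exact for linear constant-coefficient f(u) = J u + b."""
    u = u0.copy()
    E = h * phi1(h * J)        # precomputable since J is constant here
    for _ in range(n_steps):
        u = u + E @ f(u)
    return u

# stiff linear test problem u' = J u + b with known exact solution
J = np.diag([-1.0, -100.0])
b = np.array([1.0, 2.0])
f = lambda u: J @ u + b
u = exponential_euler(J, f, np.zeros(2), h=0.5, n_steps=20)
# exact solution at t = 10 for u(0) = 0:  u(t) = J^{-1} (expm(tJ) - I) b
u_exact = -np.linalg.solve(J, b) + expm(10.0 * J) @ np.linalg.solve(J, b)
print(np.linalg.norm(u - u_exact))   # near machine precision
```

Note the step size h = 0.5 is far beyond the stability limit of explicit Euler for this Jacobian; the exponential treatment of J is what makes that possible.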
A Combinatorial, Primal-Dual approach to Semidefinite Programs
Cited by 63 (10 self)
Semidefinite programs (SDPs) have been used in many recent approximation algorithms. We develop a general primal-dual approach to solve SDPs using a generalization of the well-known multiplicative weights update rule to symmetric matrices. For a number of problems, such as Sparsest Cut and Balanced Separator in undirected and directed weighted graphs, and the Min UnCut problem, this yields combinatorial approximation algorithms that are significantly more efficient than interior point methods. The design of our primal-dual algorithms is guided by a robust analysis of rounding algorithms used to obtain integer solutions from fractional ones.
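The matrix generalization of the multiplicative weights rule, shown in isolation below, maintains a density matrix proportional to the matrix exponential of the accumulated losses. This sketch shows only the update itself with random symmetric losses; the paper embeds it in a full primal-dual SDP solver, which we do not reproduce.

```python
import numpy as np
from scipy.linalg import expm

def matrix_mw(loss_matrices, eps=0.1):
    """Matrix multiplicative weights: at each round play the density matrix
    X_t = expm(-eps * sum of past losses) / trace(...), then observe a
    symmetric loss matrix with eigenvalues in [0, 1]."""
    n = loss_matrices[0].shape[0]
    cum = np.zeros((n, n))
    played = []
    for M in loss_matrices:
        W = expm(-eps * cum)
        played.append(W / np.trace(W))   # PSD with unit trace, like a probability vector
        cum = cum + M
    return played

rng = np.random.default_rng(1)
losses = []
for _ in range(50):
    B = rng.standard_normal((3, 3))
    S = B + B.T                                          # symmetrize
    S = S - np.min(np.linalg.eigvalsh(S)) * np.eye(3)    # shift to PSD
    losses.append(S / np.max(np.linalg.eigvalsh(S)))     # scale into [0, I]
Xs = matrix_mw(losses)
print(np.trace(Xs[-1]))   # every iterate has unit trace
```

When all loss matrices are diagonal this reduces exactly to the classical (scalar) multiplicative weights update over the diagonal entries.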
A Survey of Condition Number Estimation for Triangular Matrices
 SIAM Review
, 1987
Cited by 55 (7 self)
Abstract. We survey and compare a wide variety of techniques for estimating the condition number of a triangular matrix, and make recommendations concerning the use of the estimates in applications. Each of the methods is shown to bound the condition number; the bounds can broadly be categorised as upper bounds from matrix theory and lower bounds from heuristic or probabilistic algorithms. For each bound we examine by how much, at worst, it can overestimate or underestimate the condition number. Numerical experiments are presented in order to illustrate and compare the practical performance of the condition estimators. Key words: matrix condition number, triangular matrix, LINPACK, QR decomposition, rank estimation. AMS(MOS) subject classification: 65F35. 1. Introduction. Let C^{m×n} (R^{m×n}) denote the set of all m×n matrices with complex (real) elements. Given a nonsingular matrix A ∈ C^{n×n} and a matrix norm ||·|| on C^{n×n}, the condition number of A with respect to inversion is defined by κ(A) = ||A|| ||A^{-1}||.
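As a flavor of the cheap bounds such surveys compare, here is one of the simplest: for a triangular matrix, κ∞(T) ≥ max|tᵢᵢ|/min|tᵢᵢ|, since ‖T‖∞ is at least the largest diagonal entry and ‖T⁻¹‖∞ at least the reciprocal of the smallest. The function names and test matrix are ours; this is an illustration, not a specific estimator from the paper.

```python
import numpy as np

def kappa_inf_exact(T):
    """Exact infinity-norm condition number (explicit inverse; small matrices only)."""
    return np.linalg.norm(T, np.inf) * np.linalg.norm(np.linalg.inv(T), np.inf)

def kappa_lower_bound(T):
    """O(n) lower bound for triangular T: kappa_inf >= max|t_ii| / min|t_ii|,
    because ||T||_inf >= max|t_ii| and ||T^{-1}||_inf >= 1/min|t_ii|
    (the inverse of a triangular matrix has diagonal 1/t_ii)."""
    d = np.abs(np.diag(T))
    return d.max() / d.min()

rng = np.random.default_rng(0)
T = np.triu(rng.standard_normal((6, 6))) + 5 * np.eye(6)
print(kappa_lower_bound(T), "<=", kappa_inf_exact(T))
```

The bound is exact for diagonal matrices and can underestimate badly when ill conditioning comes from the off-diagonal part, which is precisely the gap the more sophisticated estimators in the survey address.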
Efficient Solution Of Parabolic Equations By Krylov Approximation Methods
 SIAM J. Sci. Statist. Comput
, 1992
Cited by 48 (3 self)
In this paper we take a new look at numerical techniques for solving parabolic equations by the method of lines. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of very small dimension to a known vector which is, in turn, computed accurately by exploiting high-order rational Chebyshev and Padé approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Further parallelism is introduced by expanding the rational approximations into partial fractions. Some ...
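The projection idea above can be sketched directly: Arnoldi builds Vₘ and the small Hessenberg matrix Hₘ with A Vₘ ≈ Vₘ Hₘ, and then exp(tA)v ≈ β Vₘ exp(tHₘ)e₁. A plain sketch without restarting, error control, or the rational approximations of the small exponential; the 1-D heat semi-discretization is our own test problem.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expv(A, v, t, m=30):
    """Approximate expm(t*A) @ v from an m-dimensional Krylov subspace.
    Only matrix-vector products with the large matrix A are needed."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: result is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)

# 1-D heat equation semi-discretization: tridiagonal Laplacian, n = 200
n = 200
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
v = np.ones(n)
u = krylov_expv(A, v, t=0.1)
u_ref = expm(0.1 * A) @ v
print(np.linalg.norm(u - u_ref))   # Krylov converges fast for the exponential
```

Here a 30-dimensional subspace reproduces the action of a 200×200 exponential essentially to working precision.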
A Schur-Parlett Algorithm for Computing Matrix Functions
 SIAM J. MATRIX ANAL. APPL
, 2003
Cited by 45 (15 self)
An algorithm for computing matrix functions is presented. It employs a Schur decomposition with reordering and blocking followed by the block form of a recurrence of Parlett, with functions of the nontrivial diagonal blocks evaluated via a Taylor series. A parameter is used to balance the conflicting requirements of producing small diagonal blocks and keeping the separations of the blocks large. The algorithm is intended primarily for functions having a Taylor series with an infinite radius of convergence, but it can be adapted for certain other functions, such as the logarithm. Novel features introduced here include a convergence test that avoids premature termination of the Taylor series evaluation and an algorithm for reordering and blocking the Schur form. Numerical experiments show that the algorithm is competitive with existing special-purpose algorithms for the matrix exponential, logarithm, and cosine. Nevertheless, the algorithm can be numerically unstable with the default choice of its blocking parameter (or in certain cases for all choices), and we explain why determining the optimal parameter appears to be a very difficult problem. A MATLAB implementation is available that is much more reliable than the function funm in MATLAB 6.5 (R13).
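The core of the method is Parlett's recurrence, derived from the commutation relation T F = F T for F = f(T) with T upper triangular. The scalar version below assumes distinct diagonal entries; the blocking and reordering that the abstract describes exist precisely because this version breaks down when eigenvalues are close. The example (f = exp, checked against scipy) is our own.

```python
import numpy as np
from scipy.linalg import expm

def parlett(T, f):
    """Scalar Parlett recurrence: F = f(T) for upper triangular T with
    DISTINCT diagonal entries. From T F = F T one gets, for i < j:
    f_ij = [t_ij (f_ii - f_jj) + sum_k (f_ik t_kj - t_ik f_kj)] / (t_ii - t_jj)."""
    n = T.shape[0]
    F = np.zeros_like(T, dtype=complex)
    for i in range(n):
        F[i, i] = f(T[i, i])
    for p in range(1, n):                  # sweep superdiagonals outward
        for i in range(n - p):
            j = i + p
            s = T[i, j] * (F[i, i] - F[j, j])
            for k in range(i + 1, j):
                s += F[i, k] * T[k, j] - T[i, k] * F[k, j]
            F[i, j] = s / (T[i, i] - T[j, j])
    return F

T = np.array([[1.0, 2.0, 3.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 6.0]])
F = parlett(T, np.exp)
print(np.linalg.norm(F - expm(T)))   # agrees with scipy's expm on this T
```

The divisions by t_ii - t_jj show where instability enters when diagonal entries cluster, motivating the paper's block form with well-separated blocks.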
The scaling and squaring method for the matrix exponential revisited
 SIAM REV
, 2009
Cited by 44 (15 self)
The calculation of the matrix exponential e^A may be one of the best-known matrix problems in numerical computation. It achieved folk status in our community from the paper by Moler and Van Loan, “Nineteen Dubious Ways to Compute the Exponential of a Matrix,” published in this journal in 1978 (and revisited in this journal in 2003). The matrix exponential is utilized in a wide variety of numerical methods for solving differential equations and many other areas. It is somewhat amazing given the long history and extensive study of the matrix exponential problem that one can improve upon the best existing methods in terms of both accuracy and efficiency, but that is what the SIGEST selection in this issue does. “The Scaling and Squaring Method for the Matrix Exponential Revisited” by N. Higham, originally published in the SIAM Journal on Matrix Analysis and Applications in 2005, applies a new backward error analysis to the commonly used scaling and squaring method, as well as a new rounding error analysis of the Padé approximant of the scaled matrix. The analysis shows, and the accompanying experimental results verify, that a Padé approximant of a higher order than currently used actually results in a more accurate ...
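The scaling and squaring structure itself is simple enough to sketch: scale A by 2^(-s) until its norm is small, approximate the exponential of the scaled matrix, then square s times. The sketch below uses a truncated Taylor series where Higham's method uses a degree-13 Padé approximant with a backward-error-based choice of s; the norm rule and test matrix are our own simplifications.

```python
import numpy as np
from scipy.linalg import expm

def expm_scaling_squaring(A, taylor_terms=12):
    """Scaling and squaring sketch: expm(A) = expm(A / 2^s) ^ (2^s).
    Taylor approximation of the scaled exponential (Higham's method
    uses a Pade approximant with a sharper choice of s)."""
    norm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(norm))) + 1) if norm > 0 else 0
    B = A / 2**s                           # now ||B||_1 <= 1/2
    X = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, taylor_terms + 1):   # Taylor series of expm(B)
        term = term @ B / k
        X = X + term
    for _ in range(s):                     # undo the scaling by repeated squaring
        X = X @ X
    return X

A = np.array([[0.0, 6.0],
              [-6.0, 0.5]])
X = expm_scaling_squaring(A)
print(np.linalg.norm(X - expm(A)) / np.linalg.norm(expm(A)))   # small rel. error
```

The point of the paper's analysis is exactly the trade-off visible here: fewer squarings (smaller s) reduce rounding-error amplification but demand a higher-order approximant of the scaled matrix.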
Approximating the logarithm of a matrix to specified accuracy
 SIAM J. Matrix Anal. Appl
, 2001
Cited by 34 (17 self)
Abstract. The standard inverse scaling and squaring algorithm for computing the matrix logarithm begins by transforming the matrix to Schur triangular form in order to facilitate subsequent matrix square root and Padé approximation computations. A transformation-free form of this method that exploits incomplete Denman–Beavers square root iterations and aims for a specified accuracy (ignoring roundoff) is presented. The error introduced by using approximate square roots is accounted for by a novel splitting lemma for logarithms of matrix products. The number of square root stages and the degree of the final Padé approximation are chosen to minimize the computational work. This new method is attractive for high-performance computation since it uses only the basic building blocks of matrix multiplication, LU factorization and matrix inversion.