Results 1–10 of 63
Pseudospectra of linear operators
 SIAM Rev, 1997

Cited by 154 (10 self)
Abstract. If a matrix or linear operator A is far from normal, its eigenvalues or, more generally, its spectrum may have little to do with its behavior as measured by quantities such as ‖Aⁿ‖ or ‖exp(tA)‖. More may be learned by examining the sets in the complex plane known as the pseudospectra of A, defined by level curves of the norm of the resolvent, ‖(zI − A)⁻¹‖. Five years ago, the author published a paper that presented computed pseudospectra of thirteen highly nonnormal matrices arising in various applications. Since that time, analogous computations have been carried out for differential and integral operators. This paper, a companion to the earlier one, presents ten examples, each chosen to illustrate one or more mathematical or physical principles.
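The definition above translates directly into a computation: in the 2-norm, ‖(zI − A)⁻¹‖ = 1/σ_min(zI − A), so pseudospectra can be explored by evaluating the smallest singular value at points z. A minimal numpy sketch (the 12×12 Jordan block is an ad hoc example, not one of the paper's thirteen matrices):

```python
import numpy as np

def resolvent_norm(A, z):
    """2-norm of (zI - A)^(-1), computed as 1 / smallest singular value."""
    n = A.shape[0]
    return 1.0 / np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1]

# An ad hoc nonnormal example: a 12x12 nilpotent Jordan block, spectrum = {0}.
n = 12
A = np.diag(np.ones(n - 1), k=1)

z = 0.5
print(resolvent_norm(A, z))   # enormous although z is well away from the spectrum
print(1.0 / abs(z))           # what the resolvent norm would be if A were normal
```

Sweeping z over a grid and contouring log10 of `resolvent_norm` reproduces the kind of pseudospectra plots the paper presents.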
Wavelet Processes and Adaptive Estimation of the Evolutionary Wavelet Spectrum
 1998

Cited by 73 (28 self)
This article defines and studies a new class of nonstationary random processes constructed from discrete nondecimated wavelets which generalizes the Cramér (Fourier) representation of stationary time series. We define an evolutionary wavelet spectrum (EWS) which quantifies how process power varies locally over time and scale. We show how the EWS may be rigorously estimated by a smoothed wavelet periodogram and how both these quantities may be inverted to provide an estimable time-localized autocovariance. We illustrate our theory with a pedagogical example based on discrete nondecimated Haar wavelets and also a real medical time series example.
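The raw ingredient of the estimator is easy to prototype: squared nondecimated Haar coefficients at each scale and shift. A numpy sketch (the toy series and normalization are illustrative choices; the paper's full estimator additionally smooths the periodogram and applies a bias correction across scales, which is omitted here):

```python
import numpy as np

def haar_filter(j):
    """Discrete Haar wavelet filter at scale j (length 2^j), unit L2 norm."""
    half = 2 ** (j - 1)
    return np.concatenate([np.ones(half), -np.ones(half)]) / np.sqrt(2 * half)

def raw_wavelet_periodogram(x, J):
    """Squared nondecimated Haar coefficients |d_{j,k}|^2 for scales j = 1..J
    at every shift k (raw: no smoothing, no bias correction)."""
    return np.array([np.abs(np.convolve(x, haar_filter(j), mode="same")) ** 2
                     for j in range(1, J + 1)])

rng = np.random.default_rng(0)
x = rng.standard_normal(512)
x[256:] *= 2.0                       # toy nonstationarity: variance jumps by 4
P = raw_wavelet_periodogram(x, J=4)  # shape (4, 512): scale x time
```

For this toy series the periodogram at fine scales is visibly larger over the second, higher-variance half, which is exactly the time-and-scale localization the EWS quantifies.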
Algorithms for Intersecting Parametric and Algebraic Curves I: Simple Intersections
 ACM Transactions on Graphics, 1995

Cited by 71 (19 self)
The problem of computing the intersection of parametric and algebraic curves arises in many applications of computer graphics and geometric and solid modeling. Previous algorithms are based on techniques from elimination theory or subdivision and iteration. The former, however, is restricted to low-degree curves, mainly due to issues of efficiency and numerical stability. In this paper we use elimination theory and express the resultant of the equations of intersection as a matrix determinant. The matrix itself, rather than its symbolic determinant, a polynomial, is used as the representation. The problem of intersection is reduced to computing the eigenvalues and eigenvectors of a numeric matrix. The main advantage of this approach lies in its efficiency and robustness. Moreover, the numerical accuracy of these operations is well understood. For almost all cases we are able to compute accurate answers in 64 bit IEEE floating point arithmetic. Keywords: Intersection, curves, a...
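The core idea, turning an intersection into an eigenvalue problem, can be shown in miniature. In this sketch (a hypothetical example, far simpler than the paper's resultant construction) the parametric parabola (x(t), y(t)) = (t, t²) is substituted into the circle x² + y² − 2 = 0, and the roots of the resulting polynomial in t are read off as eigenvalues of its companion matrix:

```python
import numpy as np

# Substituting (t, t^2) into x^2 + y^2 - 2 = 0 gives t^4 + t^2 - 2 = 0.
# Coefficients of the monic polynomial, highest degree first.
coeffs = np.array([1.0, 0.0, 1.0, 0.0, -2.0])

# Companion matrix: subdiagonal identity, last column -(a_0, ..., a_{n-1}).
n = len(coeffs) - 1
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)
C[:, -1] = -coeffs[:0:-1]

t = np.linalg.eigvals(C)                       # roots as eigenvalues
real_t = np.sort(t[np.abs(t.imag) < 1e-9].real)
print(real_t)   # the real parameter values t = -1 and t = 1
```

The real eigenvalues give the intersection points (−1, 1) and (1, 1); the complex pair corresponds to no real intersection. The paper's contribution is building the analogous matrix directly from resultants so that the symbolic determinant is never expanded.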
Analysis of the parareal time-parallel time-integration method
 SIAM J. Sci. Comput

Cited by 48 (4 self)
The parareal algorithm is a method to solve time dependent problems parallel in time: it approximates parts of the solution later in time simultaneously to parts of the solution earlier in time. In this paper the relation of the parareal algorithm to space-time multigrid and multiple shooting methods is first briefly discussed. The focus of the paper is on new convergence results that show superlinear convergence of the algorithm when used on bounded time intervals and linear convergence for unbounded intervals. Key words: time-parallel time-integration, parareal, convergence analysis, shooting, multigrid, deferred correction
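For readers unfamiliar with the method, the parareal iteration itself fits in a few lines. A sketch on the scalar model problem y′ = λy (the propagators and all parameters are illustrative choices, not taken from the paper): a cheap coarse solver G predicts serially, the expensive fine solver F corrects all time slices at once.

```python
import numpy as np

# Model problem y' = lam * y on [0, T], split into N slices of width dt.
lam, T, N, y0 = -1.0, 4.0, 16, 1.0
dt = T / N
F = lambda y: np.exp(lam * dt) * y     # fine propagator: exact over one slice
G = lambda y: y / (1.0 - lam * dt)     # coarse propagator: one backward-Euler step

# Initial guess from the coarse sweep alone.
U = np.empty(N + 1)
U[0] = y0
for n in range(N):
    U[n + 1] = G(U[n])
err_coarse = abs(U[-1] - y0 * np.exp(lam * T))

# Parareal corrections: U_{n+1} <- F(U_n_old) + G(U_n_new) - G(U_n_old).
for k in range(4):
    Fold = F(U[:-1])                   # fine runs on all slices: the parallel part
    Gold = G(U[:-1])
    for n in range(N):                 # cheap sequential coarse sweep
        U[n + 1] = Fold[n] + G(U[n]) - Gold[n]

err = abs(U[-1] - y0 * np.exp(lam * T))
```

A few corrections drive the error far below that of the coarse solver alone, consistent with the superlinear convergence on bounded intervals analyzed in the paper.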
A hybrid GMRES algorithm for nonsymmetric linear systems
 SIAM J. Matrix Anal. Appl, 1992

Cited by 46 (8 self)
Abstract. A new hybrid iterative algorithm is proposed for solving large nonsymmetric systems of linear equations. Unlike other hybrid algorithms, which first estimate eigenvalues and then apply this knowledge in further iterations, this algorithm avoids eigenvalue estimates. Instead, it runs GMRES until the residual norm drops by a certain factor, then reapplies the polynomial implicitly constructed by GMRES via a Richardson iteration with Leja ordering. Preliminary experiments suggest that the new algorithm frequently outperforms the restarted GMRES algorithm. Key words,
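The Richardson stage with Leja ordering is easy to illustrate in isolation. In this sketch the shifts are simply the eigenvalues of a small SPD matrix, a hypothetical stand-in for the roots of the GMRES residual polynomial that the paper's algorithm reuses; Leja ordering picks each next shift to maximize the product of distances to those already used, which controls the growth of intermediate residuals in the cyclic sweep.

```python
import numpy as np

def leja_order(pts):
    """Reorder points so each next one maximizes the product of distances
    (computed as a sum of log distances) to the points already chosen."""
    pts = list(pts)
    out = [max(pts, key=abs)]            # start at the largest magnitude
    pts.remove(out[0])
    while pts:
        nxt = max(pts, key=lambda p: np.sum(np.log(np.abs(p - np.array(out)))))
        out.append(nxt)
        pts.remove(nxt)
    return np.array(out)

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8.0 * np.eye(8)            # SPD test matrix (illustrative)
b = rng.standard_normal(8)

# Hypothetical shifts: exact eigenvalues stand in for the polynomial roots.
shifts = leja_order(np.linalg.eigvalsh(A))

x = np.zeros(8)
for cycle in range(3):                   # apply the shifts cyclically
    for theta in shifts:
        x = x + (b - A @ x) / theta      # Richardson step with shift theta

res = np.linalg.norm(b - A @ x)
```

With exact eigenvalues as shifts, one full sweep annihilates the residual in exact arithmetic; in floating point the Leja ordering is what keeps the cyclic iteration numerically stable.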
From Potential Theory To Matrix Iterations In Six Steps
 SIAM Rev

Cited by 45 (4 self)
The theory of the convergence of Krylov subspace iterations for linear systems of equations (conjugate gradients, biconjugate gradients, GMRES, QMR, BiCGSTAB, ...) is reviewed. For a computation of this kind, an estimated asymptotic convergence factor ρ ≤ 1 can be derived by solving a problem of potential theory or conformal mapping. Six approximations are involved in relating the actual computation to this scalar estimate. These six approximations are discussed in a systematic way and illustrated by a sequence of examples computed with tools of numerical conformal mapping and semidefinite programming.
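For the simplest case, an SPD matrix with eigenvalues filling an interval [a, b], the potential-theory estimate reduces to the classical factor ρ = (√κ − 1)/(√κ + 1) with κ = b/a. A quick check against a textbook conjugate-gradient run (matrix, sizes, and seed are ad hoc choices):

```python
import numpy as np

n, lo, hi = 200, 1.0, 100.0
kappa = hi / lo
rho = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)  # predicted factor, ~0.818

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(lo, hi, n)) @ Q.T          # SPD, spectrum fills [lo, hi]
b = rng.standard_normal(n)

# Textbook conjugate gradients, recording residual norms.
x = np.zeros(n)
r = b.copy()
p = r.copy()
res = [np.linalg.norm(r)]
for _ in range(50):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
    res.append(np.linalg.norm(r))

observed = (res[-1] / res[0]) ** (1.0 / 50)  # mean reduction per iteration
```

The observed per-iteration reduction tracks ρ (typically a little better, an instance of the superlinear effects that the paper's six approximations account for).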
Stability of the method of lines
 Numer. Math., 1992

Cited by 45 (7 self)
It is well known that a necessary condition for the Lax-stability of the method of lines is that the eigenvalues of the spatial discretization operator, scaled by the time step k, lie within a distance O(k) of the stability region of the time integration formula as k → 0. In this paper we show that a necessary and sufficient condition for stability, except for an algebraic factor, is that the ε-pseudo-eigenvalues of the same operator lie within a distance O(ε) + O(k) of the stability region as k, ε → 0. Our results generalize those of an earlier paper by considering: (a) Runge-Kutta and other one-step formulas, (b) implicit as well as explicit linear multistep formulas, (c) weighted norms, (d) algebraic stability, (e) finite and infinite time intervals, and (f) stability regions with cusps. In summary, the theory presented in this paper amounts to a transplantation of the Kreiss matrix theorem from the unit disk (for simple power iterations) to an arbitrary stability region (for method of lines calculations).
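The eigenvalue condition is easy to test numerically. A sketch for u_t = −u_x with periodic first-order upwind differences stepped by forward Euler, whose stability region is the disk |1 + z| ≤ 1; here the condition reproduces the familiar CFL restriction k ≤ h (grid sizes are illustrative):

```python
import numpy as np

n, h = 64, 1.0 / 64
# Periodic upwind differentiation matrix: (D u)_j = (u_j - u_{j-1}) / h.
D = (np.eye(n) - np.roll(np.eye(n), 1, axis=0)) / h
A = -D                                   # semidiscretization of u_t = -u_x
lam = np.linalg.eigvals(A)

def stable(k):
    """Do all scaled eigenvalues k*lam lie in forward Euler's region |1+z| <= 1?"""
    return np.all(np.abs(1.0 + k * lam) <= 1.0 + 1e-12)

print(stable(0.5 * h), stable(1.5 * h))  # True False
```

For this normal (circulant) operator the eigenvalue condition is also sufficient; the paper's point is that for nonnormal discretizations one must test the ε-pseudo-eigenvalues instead.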
Pseudospectra of the convection-diffusion operator
 SIAM J. Appl. Math, 1994

Cited by 30 (7 self)
Abstract. The spectrum of the simplest 1D convection-diffusion operator is a discrete subset of the negative real axis, but the pseudospectra are regions in the complex plane bounded approximately by parabolas. Put another way, the norm of the resolvent is exponentially large as a function of the Péclet number throughout a certain approximately parabolic region. These observations have a simple physical basis, and suggest that conventional spectral analysis for convection-diffusion operators may be of limited value in some applications. Key words: convection-diffusion operator, Péclet number, pseudospectra. AMS subject classifications.
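The phenomenon shows up already in a small centered-difference discretization of u″ + cu′ on (0, 1) with Dirichlet conditions (grid size, convection coefficient, and evaluation point below are ad hoc choices): the eigenvalues are real and negative, yet the resolvent norm at a point z well separated from them is orders of magnitude larger than 1/dist(z, spectrum), which is the value a normal matrix would give.

```python
import numpy as np

n, c = 50, 30.0                   # grid points and convection coefficient (ad hoc)
h = 1.0 / (n + 1)
A = (np.diag(-2.0 / h**2 * np.ones(n))
     + np.diag((1.0 / h**2 - c / (2 * h)) * np.ones(n - 1), -1)
     + np.diag((1.0 / h**2 + c / (2 * h)) * np.ones(n - 1), 1))

lam = np.linalg.eigvals(A)        # all real and negative
z = -220.0                        # a point inside the parabolic region
dist = np.min(np.abs(lam - z))    # distance from z to the spectrum
smin = np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1]
rn = 1.0 / smin                   # resolvent norm of (zI - A)^(-1)

# For a normal matrix rn would equal 1/dist; here it is vastly larger.
print(rn, 1.0 / dist)
```

Increasing c pushes this discrepancy up exponentially, the discrete counterpart of the Péclet-number growth described in the abstract.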
Convergence of Restarted Krylov Subspaces to Invariant Subspaces
 SIAM J. Matrix Anal. Appl, 2001

Cited by 28 (4 self)
The performance of Krylov subspace eigenvalue algorithms for large matrices can be measured by the angle between a desired invariant subspace and the Krylov subspace. We develop general bounds for this convergence that include the effects of polynomial restarting and impose no restrictions concerning the diagonalizability of the matrix or its degree of nonnormality. Associated with a desired set of eigenvalues is a maximum "reachable invariant subspace" that can be developed from the given starting vector. Convergence for this distinguished subspace is bounded in terms involving a polynomial approximation problem. Elementary results from potential theory lead to convergence rate estimates and suggest restarting strategies based on optimal approximation points (e.g., Leja or Chebyshev points); exact shifts are evaluated within this framework. Computational examples illustrate the utility of these results. Origins of superlinear effects are also described.
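The convergence measure used here, the largest principal angle between a Krylov subspace and the target invariant subspace, can be tracked directly with scipy. A toy sketch without restarting (the spectrum, sizes, and seed are invented for illustration):

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(3)
n = 60
evals = np.concatenate([[10.0, 9.0], np.linspace(0.0, 5.0, n - 2)])
V = rng.standard_normal((n, n))
A = V @ np.diag(evals) @ np.linalg.inv(V)
target = V[:, :2]                 # invariant subspace for the eigenvalues 10, 9

def krylov_basis(A, v, m):
    """Orthonormal basis of K_m(A, v) by Gram-Schmidt with reorthogonalization."""
    Q = np.zeros((A.shape[0], m))
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(1, m):
        w = A @ Q[:, j - 1]
        for _ in range(2):        # repeat the projection for numerical safety
            w = w - Q[:, :j] @ (Q[:, :j].T @ w)
        Q[:, j] = w / np.linalg.norm(w)
    return Q

v0 = rng.standard_normal(n)
# Largest principal angle between K_m(A, v0) and the target subspace.
angles = [np.max(subspace_angles(krylov_basis(A, v0, m), target))
          for m in (4, 8, 16)]
```

Because Krylov subspaces are nested, the angle is non-increasing in m, and it shrinks geometrically at a rate governed by the gap between {10, 9} and the rest of the spectrum, the quantity the paper's potential-theory bounds estimate.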