Results 1–10 of 36
COMPLEX SYMMETRIC OPERATORS AND APPLICATIONS
, 2005
Abstract
Cited by 35 (10 self)
We study a few classes of Hilbert space operators whose matrix representations are complex symmetric with respect to a preferred orthonormal basis. The existence of this additional symmetry has notable implications and, in particular, it explains from a unifying point of view some classical results. We explore applications of this symmetry to Jordan canonical models, self-adjoint extensions of symmetric operators, rank-one unitary perturbations of the compressed shift, Darlington synthesis and matrix-valued inner functions, and free bounded analytic interpolation in the disk.
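As a toy illustration of the symmetry in question (my example, not from the paper): with respect to the standard basis and conjugation, a matrix is complex symmetric when it equals its plain transpose, which does not force it to be Hermitian.

```python
def is_complex_symmetric(T):
    # Complex symmetric w.r.t. the standard basis and conjugation:
    # T equals its plain transpose (no complex conjugation involved).
    n = len(T)
    return all(T[i][j] == T[j][i] for i in range(n) for j in range(n))

def is_hermitian(T):
    # Hermitian: T equals its conjugate transpose.
    n = len(T)
    return all(T[i][j] == T[j][i].conjugate() for i in range(n) for j in range(n))

# complex symmetric but not Hermitian (note the 1+1j on the diagonal)
T = [[1 + 1j, 2j],
     [2j,     3 + 0j]]
```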
Efficient Algorithms for Computing the Nearest Polynomial with Constrained Roots
, 1998
Positive polynomials in scalar and matrix variables, the spectral theorem, and optimization
in: Structured Matrices and Dilations. A Volume Dedicated to the Memory of Tiberiu Constantinescu
Abstract
Cited by 23 (8 self)
We follow a stream of the history of positive matrices and positive functionals, as applied to algebraic sums of squares decompositions, with emphasis on the interaction between classical moment problems, function theory of one or several complex variables, and modern operator theory. The second part of the survey focuses on recently discovered connections between real algebraic geometry and optimization, as well as polynomials in matrix variables and some control-theory problems. These new applications have prompted a series of recent studies devoted to the structure of positivity and convexity in a free ∗-algebra, the appropriate setting for analyzing inequalities on polynomials having matrix variables. We sketch some of these developments, add to them, and comment on the rapidly growing literature.
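As a small worked instance of a sums-of-squares decomposition (illustrative, not taken from the survey): the positive polynomial $p(x,y) = x^2 + 2xy + 2y^2$ admits a Gram-matrix representation

```latex
p(x,y) \;=\; \begin{pmatrix} x & y \end{pmatrix}
             \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}
             \begin{pmatrix} x \\ y \end{pmatrix}
       \;=\; (x + y)^2 + y^2,
```

where the Gram matrix is positive semidefinite and a Cholesky-type factorization of it produces the squares; searching over such Gram matrices is a semidefinite program, which is the link to optimization mentioned in the abstract.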
The Atiyah–Hitchin bracket and open Toda Lattice
 Journal of Geometry and Physics
, 2003
Abstract
Cited by 17 (6 self)
The dynamics of the finite non-periodic Toda lattice is an isospectral deformation of the finite tridiagonal Jacobi matrix. It has been known since the work of Stieltjes that such matrices are in one-to-one correspondence with their Weyl functions. These are rational functions mapping the upper half-plane into itself. We consider representations of the Weyl functions as a quotient of two polynomials and as an exponential representation. We establish a connection between these representations and the recently developed algebraic-geometric approach to the inverse problem for Jacobi matrices. The space of rational functions has a natural Poisson structure discovered by Atiyah and Hitchin. We show that the invariance of the AH structure under linear-fractional transformations leads to two systems of canonical coordinates and two families of commuting Hamiltonians. We establish a relation of one of these systems with Jacobi elliptic coordinates.
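For a 2x2 Jacobi matrix J = [[a1, b1], [b1, a2]], the Weyl function m(lambda) = <(J - lambda*I)^{-1} e1, e1> has a closed form, and a quick numerical check (illustrative only, with made-up entries; `weyl_m` is my own helper name) confirms the half-plane-preserving property the abstract mentions.

```python
def weyl_m(a1, a2, b1, lam):
    # Weyl function m(lambda) = <(J - lambda*I)^{-1} e1, e1> for the
    # 2x2 Jacobi matrix J = [[a1, b1], [b1, a2]], via Cramer's rule:
    # the (1,1) entry of the inverse is (a2 - lambda) / det(J - lambda*I).
    det = (a1 - lam) * (a2 - lam) - b1 * b1
    return (a2 - lam) / det

# a point in the upper half-plane is mapped back into it
m = weyl_m(0.0, 1.0, 0.5, 1.0 + 2.0j)
```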
Pivoting for Structured Matrices and Rational Tangential Interpolation
 CONTEMPORARY MATHEMATICS
Abstract
Cited by 11 (6 self)
Gaussian elimination is a standard tool for computing triangular factorizations of general matrices, and thereby solving the associated linear systems of equations. As is well known, when this classical method is implemented in finite-precision arithmetic, it often fails to compute the solution accurately because of the accumulation of small roundoffs accompanying each elementary floating point operation. This problem motivated a number of interesting and important studies in modern numerical linear algebra; for our purposes in this paper we only mention that, starting with the breakthrough work of Wilkinson, several pivoting techniques have been proposed to stabilize the numerical behavior of Gaussian elimination. Interestingly, matrix interpretations of many known and new algorithms for various applied problems can be seen as ways of computing triangular factorizations of the associated structured matrices, where different patterns of structure arise in the context of different physical problems. The special structure of such matrices [e.g., Toeplitz, Hankel, Cauchy, Vandermonde, etc.] often ...
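The pivoting idea is easy to sketch. The following minimal partial-pivoting LU (an illustrative textbook version, not the structured algorithms the paper studies) swaps the largest remaining entry of each column into the pivot position before eliminating:

```python
def lu_partial_pivot(A):
    """LU factorization with partial pivoting: returns (perm, L, U) such that
    the row permutation perm of A equals L @ U. A minimal dense sketch."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[0.0] * n for _ in range(n)]
    perm = list(range(n))
    for k in range(n):
        # pivot: bring the largest |entry| in column k (rows k..n-1) to row k
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        U[k], U[p] = U[p], U[k]
        L[k][:k], L[p][:k] = L[p][:k], L[k][:k]
        perm[k], perm[p] = perm[p], perm[k]
        L[k][k] = 1.0
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return perm, L, U
```

Without the pivot search, the multipliers `m` can become huge and amplify roundoff; with it, all multipliers satisfy |m| <= 1.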
LMI representations of convex semialgebraic sets and determinantal representations of algebraic hypersurfaces: past, present, and future
Reflections on Schur-Cohn Matrices and Jury-Marden Tables and classification of related unit-circle zero location criteria
Abstract
Cited by 10 (9 self)
We use the so-called reflection coefficients (RC) to examine, review, and classify the Schur-Cohn and Marden-Jury (SCMJ) class of tests for determining the zero location of a discrete-time system polynomial with respect to the unit circle. These parameters are taken as a platform to propose a partition of the SCMJ class into four useful types of schemes. The four types differ in the sequence of polynomials (i.e., the 'table') they associate with the tested polynomial by scaling factors: (A) a sequence of monic polynomials, (B) a sequence requiring the fewest arithmetic operations, (C) a sequence that produces the principal minors of the Schur-Cohn matrix, and (D) a sequence that avoids division arithmetic. A direct derivation of a zero location rule in terms of the RC is first provided and then used to obtain a proper zero location rule in terms of the leading coefficients of the polynomials of the B, C, and D scheme prototypes. We review many of the published stability tests in the SCMJ class and show that each can be sorted into one of these four types. This process is instrumental in extending some of the tests from stability conditions to zero location, and from real to complex polynomials, in providing proofs for tests stated without one, and in correcting some inaccuracies. Another interesting outcome of the current approach is that a by-product of developing a zero location rule for the C-type test is one more proof of the relation between the zero location of a polynomial and the inertia of its Schur-Cohn matrix.
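The reflection-coefficient idea can be sketched with the classical step-down (backward Levinson) recursion; this is a generic illustration of the RC stability criterion, and the paper's table conventions may differ:

```python
def reflection_coefficients(a):
    """Step-down recursion for the reflection coefficients of the monic real
    polynomial p(z) = z^n + a[1] z^(n-1) + ... + a[n], passed as the
    coefficient list a = [1, a1, ..., an]."""
    a = [float(x) for x in a]
    ks = []
    while len(a) > 1:
        m = len(a) - 1
        k = a[m]          # reflection coefficient at the current degree
        ks.append(k)
        if abs(k) >= 1:   # a zero lies on or outside the unit circle; stop
            break
        a = [(a[i] - k * a[m - i]) / (1 - k * k) for i in range(m)]
    return ks

def schur_cohn_stable(a):
    # all zeros lie strictly inside the unit circle iff every |k| < 1
    return all(abs(k) < 1 for k in reflection_coefficients(a))
```

For example, z^2 - z + 0.25 (double zero at 0.5) yields k-values 0.25 and -0.8, both inside (-1, 1), so the polynomial passes; z - 2 yields k = -2 and fails.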
On the Trace Formula for Quadratic Forms and Some Applications
, 1994
Abstract
Cited by 6 (0 self)
This paper deals with a variant of the trace formula for quadratic forms which allows applications to some algorithmic problems of real algebraic geometry. The formula is applied to the counting of real zeros on 0-dimensional varieties under side constraints, as well as to the 0-dimensional case of the Bröcker-Scheiderer result about the description of basic open semialgebraic sets. Furthermore, it can be used to give a 'visible' argument for Tarski's theorem on quantifier elimination in the theory of real closed fields. It is only fair to admit that our method is nothing but a modern version of old ideas of Hermite and Sylvester, who had already shown how to count real zeros by calculating signatures of appropriate quadratic forms. What has been added to their approach is a certain algebraic machinery that enables us to treat multivariate problems more uniformly. In an analogous approach, P. Pedersen has independently developed a similar method to count real zeros, also starting from Hermite's ideas.
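The univariate Hermite idea is concrete enough to sketch (my illustration under generic assumptions, not the paper's multivariate machinery): the signature of the Hankel matrix of power sums of the roots equals the number of distinct real roots.

```python
from fractions import Fraction

def power_sums(c, m):
    """Newton's identities: power sums s_0..s_{m-1} of the roots of the monic
    polynomial x^n + c[1] x^(n-1) + ... + c[n], with c = [1, c1, ..., cn]."""
    n = len(c) - 1
    c = [Fraction(x) for x in c]
    s = [Fraction(n)]
    for k in range(1, m):
        if k <= n:
            acc = sum((c[j] * s[k - j] for j in range(1, k)), Fraction(0)) + k * c[k]
        else:
            acc = sum((c[j] * s[k - j] for j in range(1, n + 1)), Fraction(0))
        s.append(-acc)
    return s

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n = len(M)
    sign, d = 1, Fraction(1)
    for k in range(n):
        p = next((i for i in range(k, n) if M[i][k] != 0), None)
        if p is None:
            return Fraction(0)
        if p != k:
            M[k], M[p] = M[p], M[k]
            sign = -sign
        d *= M[k][k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    return sign * d

def count_distinct_real_roots(c):
    """Hermite's method: the signature of the Hankel matrix H = (s_{i+j})
    equals the number of distinct real roots. The signature is read off the
    leading principal minors (Jacobi's rule), assuming, as in the generic
    case, that none of them vanishes."""
    n = len(c) - 1
    s = power_sums(c, 2 * n - 1)
    H = [[s[i + j] for j in range(n)] for i in range(n)]
    seq = [Fraction(1)] + [det([row[:k] for row in H[:k]]) for k in range(1, n + 1)]
    P = sum(1 for a, b in zip(seq, seq[1:]) if a * b > 0)  # sign agreements
    V = sum(1 for a, b in zip(seq, seq[1:]) if a * b < 0)  # sign variations
    return P - V
```

For x^2 - 1 the matrix is diag(2, 2) with signature 2 (two real roots); for x^2 + 1 it is diag(2, -2) with signature 0 (none).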
On some Structured Inverse Eigenvalue Problems
 NUMER. ALGORITHMS
, 1995
Abstract
Cited by 5 (0 self)
This work deals with various finite algorithms that solve two special Structured Inverse Eigenvalue Problems (Siep). The first problem we consider is the Jacobi Inverse Eigenvalue Problem (Jiep): given some constraints on two sets of reals, find a Jacobi matrix J (real, symmetric, tridiagonal, with positive off-diagonal entries) that admits the two given sets as its spectrum and principal subspectrum. Two classes of finite algorithms are considered. The polynomial algorithm is based on a special Euclid-Sturm algorithm (Householder's terminology) which has been rediscovered several times. The matrix algorithm is a symmetric Lanczos algorithm with a special initial vector. A characterization of the matrix ensures the equivalence of the two algorithms in exact arithmetic. The results of the symmetric situation are extended to the nonsymmetric case; this is the second Siep considered: the Tridiagonal Inverse Eigenvalue Problem (Tiep). Possible breakdowns may occur in the polynomial...
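The "Lanczos with a special initial vector" idea can be sketched as follows (my reconstruction under standard spectral-data assumptions, not the paper's code): running symmetric Lanczos on diag(lams) with starting vector (sqrt(w_i)) yields recurrence coefficients (alpha_k, beta_k) that are exactly the diagonal and off-diagonal of a Jacobi matrix with eigenvalues lams and first-component weights w.

```python
import math

def jacobi_from_spectral_data(lams, w):
    """Symmetric Lanczos on A = diag(lams) with start q1 = (sqrt(w_i)),
    w_i > 0 summing to 1. Returns (alphas, betas): the diagonal and positive
    off-diagonal of the reconstructed Jacobi matrix."""
    n = len(lams)
    q_prev = [0.0] * n
    q = [math.sqrt(wi) for wi in w]
    alphas, betas = [], []
    beta = 0.0
    for _ in range(n):
        Aq = [lams[i] * q[i] for i in range(n)]        # A = diag(lams)
        alpha = sum(q[i] * Aq[i] for i in range(n))    # Rayleigh quotient
        alphas.append(alpha)
        r = [Aq[i] - alpha * q[i] - beta * q_prev[i] for i in range(n)]
        beta = math.sqrt(sum(ri * ri for ri in r))
        if beta < 1e-12:                               # process terminated
            break
        betas.append(beta)
        q_prev, q = q, [ri / beta for ri in r]
    return alphas, betas
```

For eigenvalues {1, -1} with equal weights, the recursion returns alphas = (0, 0) and beta = 1, i.e. the Jacobi matrix [[0, 1], [1, 0]], whose eigenvalues are indeed ±1.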
Fast Computation of the Bezout and Dixon Resultant Matrices
 Journal of Computational and Applied Mathematics
Abstract
Cited by 5 (1 self)
Efficient algorithms are derived for computing the entries of the Bezout resultant matrix for two univariate polynomials of degree n, and for calculating the entries of the Dixon-Cayley resultant matrix for three bivariate polynomials of bidegree (m, n). Standard methods based on explicit formulas require O(n^3) additions and multiplications to compute all the entries of the Bezout resultant matrix. Here we present a new recursive algorithm for computing these entries that uses only O(n^2) additions and multiplications. The improvement is even more dramatic in the bivariate setting. Established techniques based on explicit formulas require O(m^4 n^4) additions and multiplications to calculate all the entries of the Dixon-Cayley resultant matrix. In contrast, our recursive algorithm for computing these entries uses only O(m^2 n^3) additions and multiplications. Keywords: Algebraic Geometry; Computer Graphics; Geometric Modeling; Robotics; Elimination Theory; Resultant
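The O(n^2) univariate case rests on the fact that each Bezout entry differs from a neighbor by one cross-product term; the following is a sketch of that recursive idea (not necessarily the paper's exact algorithm):

```python
def bezout_matrix(f, g):
    """Entries of the n x n Bezout matrix B, defined by
        f(x)g(y) - f(y)g(x) = (x - y) * sum_{i,j} B[i][j] x^i y^j,
    computed in O(n^2) operations via the recurrence
        B[i][j] = f[j+1]*g[i] - f[i]*g[j+1] + B[i-1][j+1]
    with out-of-range entries taken as 0. f and g are coefficient lists in
    ascending powers, padded to a common degree n."""
    n = max(len(f), len(g)) - 1
    f = f + [0] * (n + 1 - len(f))
    g = g + [0] * (n + 1 - len(g))
    B = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n - 1, -1, -1):
            prev = B[i - 1][j + 1] if i >= 1 and j + 1 < n else 0
            B[i][j] = f[j + 1] * g[i] - f[i] * g[j + 1] + prev
    return B
```

For f = x^2 - 1 and g = x one has f(x)g(y) - f(y)g(x) = (x - y)(xy + 1), so the Bezout matrix is the 2x2 identity, and like every Bezout matrix it is symmetric.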