Results 1–10 of 17
Two purposes for matrix factorization: A historical appraisal
 SIAM Review
Cited by 18 (0 self)
Abstract. Matrix factorization in numerical linear algebra (NLA) typically serves the purpose of restating some given problem in such a way that it can be solved more readily; for example, one major application is in the solution of a linear system of equations. In contrast, within applied statistics/psychometrics (AS/P), a much more common use for matrix factorization is in presenting, possibly spatially, the structure that may be inherent in a given data matrix obtained on a collection of objects observed over a set of variables. The actual components of a factorization are now of prime importance and not just as a mechanism for solving another problem. We review some connections between NLA and AS/P and their respective concerns with matrix factorization and the subsequent rank reduction of a matrix. We note in particular that several results available for many decades in AS/P were more recently (re)discovered in the NLA literature. Two other distinctions between NLA and AS/P are also discussed briefly: how a generalized singular value decomposition might be defined, and the differing uses for the (newer) methods of optimization based on cyclic or iterative projections.
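The NLA/AS-P contrast drawn in this abstract can be made concrete with a small numpy sketch (the matrices and names here are purely illustrative): a QR factorization used only as a device for solving Ax = b, versus an SVD whose factors themselves exhibit the structure of a data matrix.

```python
import numpy as np

# NLA use: factor A = QR merely as a means to solve Ax = b.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)  # back-substitution on the triangular factor

# AS/P use: the factors are the object of interest; the singular
# values of a rank-1 "data" matrix reveal its low-rank structure.
data = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])
U, s, Vt = np.linalg.svd(data)
print(np.allclose(A @ x, b), s)  # one dominant singular value, one ~0
```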
Eigenvalue Computation in the 20th Century
 Journal of Computational and Applied Mathematics
, 2000
Cited by 7 (0 self)
This paper sketches the main research developments in the area of computational methods for eigenvalue problems during the 20th century. The earliest of such methods dates back to work of Jacobi in the middle of the nineteenth century. Since computing eigenvalues and vectors is essentially more complicated than solving linear systems, it is not surprising that highly significant developments in this area started with the introduction of electronic computers around 1950. In the early decades of this century, however, important theoretical developments had been made from which computational techniques could grow. Research in this area of numerical linear algebra is very active, since there is a heavy demand for solving complicated problems associated with stability and perturbation analysis for practical applications.
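The method of Jacobi mentioned above survives essentially unchanged: for a symmetric matrix, plane rotations are applied cyclically, each one zeroing a single off-diagonal pair, and the diagonal converges to the eigenvalues. A minimal Python sketch, not an optimized implementation:

```python
import numpy as np

def jacobi_eigenvalues(A, sweeps=10):
    # Cyclic Jacobi for a symmetric matrix A (illustrative sketch).
    A = A.astype(float).copy()
    n = A.shape[0]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # rotation angle chosen so the (p, q) entry is annihilated
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J  # similarity transform: eigenvalues preserved
    return np.sort(np.diag(A))

print(jacobi_eigenvalues(np.array([[2.0, 1.0], [1.0, 2.0]])))  # eigenvalues 1 and 3
```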
The Moore–Penrose generalized inverse for sums of matrices
 SIAM J. Matrix Anal. Appl
, 1999
Cited by 6 (0 self)
In this paper we exhibit, under suitable conditions, a neat relationship between the Moore–Penrose generalized inverse of a sum of two matrices and the Moore–Penrose generalized inverses of the individual terms. We include an application to the parallel sum of matrices.
AMS 1991 subject classifications. Primary 15A09; secondary 15A18.
Key words and phrases. Moore–Penrose generalized inverse, Sherman–Morrison–Woodbury formula, singular value decomposition, rank additivity, parallel sum.
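For readers without the paper at hand, the Moore–Penrose inverse itself is characterized by the four Penrose conditions, which are easy to verify numerically (a numpy check of the definition, independent of the sum formula the paper derives):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))  # rank <= 3
P = np.linalg.pinv(A)  # Moore-Penrose generalized inverse

assert np.allclose(A @ P @ A, A)      # (1) A P A = A
assert np.allclose(P @ A @ P, P)      # (2) P A P = P
assert np.allclose((A @ P).T, A @ P)  # (3) A P is symmetric
assert np.allclose((P @ A).T, P @ A)  # (4) P A is symmetric
```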
A Fast+Practical+Deterministic Algorithm for Triangularizing Integer Matrices
, 1996
Cited by 4 (2 self)
This paper presents a new algorithm for computing the row reduced echelon form triangularization H of an n × m integer input matrix A. The cost of the algorithm is O(nmr^2 log^2(r||A||) + r^4 log^3(r||A||)) bit operations, where r is the rank of A and ||A|| = max_ij |A_ij|. This complexity result assumes standard (quadratic) integer arithmetic but still matches, in the parameters n, m and r, the best bit complexity we can reasonably hope for under the assumption of standard matrix arithmetic. A unimodular transforming matrix U which satisfies UA = H is also computed within the same running time. As a direct application of our triangularization algorithm we give a fast algorithm for solving a system Ax = b of linear Diophantine equations. The algorithms presented here are both fast and practical. They are easily implemented, handle the case of input matrices having arbitrary shape and rank profile, and allow integer arithmetic to be performed in a residue number system. ...
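The fast algorithm itself is beyond an abstract, but the underlying idea — triangularizing an integer matrix with unimodular row operations built from the extended Euclidean algorithm, while tracking U with UA = H — can be sketched in a few lines of Python. This is a naive illustration of the idea, not the asymptotically fast algorithm analyzed above:

```python
def extended_gcd(a, b):
    # returns (g, x, y) with x*a + y*b == g == gcd(a, b)
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def triangularize(A):
    """Upper-triangularize an integer matrix by unimodular row operations.
    Returns (H, U) with U*A == H and U unimodular (naive sketch)."""
    n, m = len(A), len(A[0])
    H = [row[:] for row in A]
    U = [[int(i == j) for j in range(n)] for i in range(n)]
    piv = 0
    for col in range(m):
        if piv >= n:
            break
        for row in range(piv + 1, n):
            a, b = H[piv][col], H[row][col]
            if b == 0:
                continue
            g, x, y = extended_gcd(a, b)
            # 2x2 transform [[x, y], [-b/g, a/g]] has determinant +1
            p, q = -b // g, a // g
            H[piv], H[row] = ([x * u + y * v for u, v in zip(H[piv], H[row])],
                              [p * u + q * v for u, v in zip(H[piv], H[row])])
            U[piv], U[row] = ([x * u + y * v for u, v in zip(U[piv], U[row])],
                              [p * u + q * v for u, v in zip(U[piv], U[row])])
        if H[piv][col] != 0:
            piv += 1
    return H, U
```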
On a Classical Method for Computing Eigenvectors
, 1995
Cited by 4 (2 self)
One of the oldest methods for computing an eigenvector of a matrix F is based on the solution of a set of homogeneous equations which can be traced back to the times of Cauchy (1829). The principal difficulty of this approach was identified by Wilkinson (1958). We remove this obstacle and analyse the viability of this classical method. The key to the analysis is provided by the reciprocals of the diagonal elements of the inverse of the matrix F − τI, where τ is a shift approximating an eigenvalue. The final missing link is a perturbation result due to Sherman and Morrison, who proved that F − (1/{F^{-1}}_{j,i}) e_i e_j^T is singular. We extend this result to the block case. Finally, we give a new impetus for Rayleigh quotient and Laguerre iterations.
1 Introduction and Summary
An eigenvector, x, corresponding to an eigenvalue, λ, of a general dense matrix F can be obtained by solving the homogeneous system of equations (F − λI)x = 0. The conventional method of solving h...
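The shifted matrix F − τI above is also the engine of inverse iteration, a standard companion technique (not the paper's exact procedure): repeatedly solving (F − τI)y = x amplifies the eigenvector whose eigenvalue lies nearest the shift τ. A small numpy sketch:

```python
import numpy as np

def inverse_iteration(F, tau, iters=50):
    """Approximate the eigenvector of F for the eigenvalue nearest
    the shift tau by repeated solves with F - tau*I (sketch)."""
    n = F.shape[0]
    x = np.arange(1.0, n + 1)      # any start not orthogonal to the target
    M = F - tau * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(M, x)  # amplify the nearest-eigenvalue direction
        x /= np.linalg.norm(x)
    return x

F = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
x = inverse_iteration(F, 2.9)
print(F @ x - 3.0 * x)  # ~ zero: x is the eigenvector for eigenvalue 3
```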
GIANFRANCO CIMMINO’S CONTRIBUTIONS TO NUMERICAL MATHEMATICS
Cited by 4 (1 self)
Abstract. Gianfranco Cimmino (1908–1989) authored several papers in the field of numerical analysis, and particularly in the area of matrix computations. His most important contribution in this field is the iterative method for solving linear algebraic systems that bears his name, published in 1938. This paper reviews Cimmino's main contributions to numerical mathematics, together with subsequent developments inspired by his work. Some background information on Italian mathematics and on Mauro Picone's Istituto Nazionale per le Applicazioni del Calcolo, where Cimmino's early numerical work took place, is provided. The lasting importance of Cimmino's work in various application areas is demonstrated by an analysis of citation patterns in the broad technical and scientific literature.
Key words. Cimmino's method, history of numerical linear algebra
AMS subject classifications. Primary 01-08, 01A60. Secondary 65F10, 65R30.
1. Introduction. Gianfranco Cimmino ...
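Cimmino's 1938 method itself is short enough to state: reflect the current iterate across each hyperplane a_i·x = b_i and move to the mean of the reflections. A minimal numpy sketch of the unweighted version, assuming a consistent system:

```python
import numpy as np

def cimmino(A, b, iters=100):
    """Cimmino's method: average the reflections of the current
    iterate across the hyperplanes a_i . x = b_i (minimal sketch)."""
    m, n = A.shape
    x = np.zeros(n)
    norms2 = (A * A).sum(axis=1)  # squared row norms ||a_i||^2
    for _ in range(iters):
        residual = b - A @ x
        # mean of reflections = x + (2/m) * sum_i (r_i / ||a_i||^2) a_i
        x = x + (2.0 / m) * (A.T @ (residual / norms2))
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([5.0, 5.0])
print(cimmino(A, b))  # converges to the solution [1, 2]
```

Averaging rather than applying the reflections sequentially is what makes the method naturally parallel, one reason for its lasting interest.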
On the probability distribution of the optimum of a random linear program
 SIAM J. on Control
, 1966
Cited by 2 (0 self)
In the present paper we shall consider linear programming problems
μ = max c′x subject to Ax = b, x ≥ 0.  (1.1)
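A special case makes (1.1) easy to experiment with: if A is a single row of ones and b = 1, the feasible set is the unit simplex, the optimum is attained at a vertex e_i, and so μ = max_i c_i. The distribution of the optimum of the random program can then be sampled directly (a numpy illustration; the Gaussian choice of c is an assumption, not the paper's setting):

```python
import numpy as np

# Feasible set {x >= 0, sum(x) = 1}: the optimum of max c'x is max_i c_i.
rng = np.random.default_rng(0)
n, trials = 5, 10_000
c = rng.standard_normal((trials, n))  # random objective vectors
mu = c.max(axis=1)                    # optimum of each random LP

# For iid standard normal c, P(mu <= t) = Phi(t)**n; check at t = 0:
print((mu <= 0).mean(), 0.5 ** n)     # empirical vs exact probability
```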
Ka-Broadband Satellite Communication Using Cyclostationary Parabolic Beamforming
, 1997
Cited by 1 (0 self)
The purpose of this document is to investigate the design of a broadband satellite system which operates at the Ka frequency band. A channel model was developed which indicated that the satellite link would be noise-limited and slowly time-varying. Cyclostationary beamforming using the Cross-SCORE (Self Coherent Property Restoral) algorithm on a high-gain multifeed parabolic antenna array was considered. The slowly time-varying channel environment allowed for a long correlation time. This application of SCORE is in contrast to existing work, which uses linear arrays in interference-limited environments with short correlation times.
MULTIVARIATE ANALYSIS OF VARIANCE USING PATTERNED COVARIANCE MATRICES
, 1969
Cited by 1 (0 self)
This study is concerned with the development, examination and comparison of statistics for testing linear hypotheses about the means in the context of the multivariate normal linear model when the covariance matrix is known to conform to a linear pattern. Two likelihood ratio criteria appropriate for the problem are put forward initially: the standard criterion of Wilks, assuming no restrictions on the covariance matrix, and the criterion assuming the full set of restrictions implied by the linear pattern. It is found that in general the latter criterion can be calculated only by a lengthy iterative procedure. A third "intermediate" likelihood ratio criterion, assuming some but not all of the linear restrictions, is proposed as a way of taking advantage of knowledge about the covariance matrix while avoiding the complex computations required for the second statistic. It is noted that for some combinations of ...
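The "standard criterion of Wilks" referred to above reduces, for a one-way layout, to Λ = det(E)/det(E + H), where E and H are the within- and between-group sums-of-squares-and-cross-products matrices. A minimal numpy sketch of that unrestricted criterion (illustrative only; it does not implement the paper's patterned-covariance criteria):

```python
import numpy as np

def wilks_lambda(groups):
    """Wilks' criterion for a one-way MANOVA: Lambda = det(E)/det(E + H),
    with E the within-group and H the between-group SSCP matrices."""
    all_obs = np.vstack(groups)
    grand = all_obs.mean(axis=0)
    p = all_obs.shape[1]
    E = np.zeros((p, p))
    H = np.zeros((p, p))
    for g in groups:
        m = g.mean(axis=0)
        centered = g - m
        E += centered.T @ centered          # within-group scatter
        d = (m - grand)[:, None]
        H += len(g) * (d @ d.T)             # between-group scatter
    return np.linalg.det(E) / np.linalg.det(E + H)
```

Values near 1 indicate group means indistinguishable relative to within-group variation; values near 0 indicate strongly separated means.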