Results 1–10 of 34
Imperfect Competition, Information Heterogeneity, and Financial Contagion,” Working Paper
, 2004
"... This study examines how heterogeneity of private information may induce financial contagion. Using a model of multiasset trading in which the three main channels of contagion through financial linkages in the literature (correlated information, correlated liquidity, and portfolio rebalancing) are ..."
Abstract

Cited by 34 (8 self)
This study examines how heterogeneity of private information may induce financial contagion. Using a model of multi-asset trading in which the three main channels of contagion through financial linkages in the literature (correlated information, correlated liquidity, and portfolio rebalancing) are ruled out by construction, I show that financial contagion can still be an equilibrium outcome when speculators receive heterogeneous fundamental information. Risk-neutral speculators trade strategically across many assets to mask their information advantage about one asset. Asymmetric sharing of information among them prevents rational market makers from learning about their individual signals and trades with sufficient accuracy. Incorrect cross-inference about terminal payoffs and contagion ensue. When used to analyze the transmission of shocks across countries, my model suggests that the process of generation and disclosure of information in emerging markets may explain their vulnerability to financial contagion (JEL D82, G14, G15). Many recent financial crises were initiated by episodes of "local" turmoil.
Two purposes for matrix factorization: A historical appraisal
 SIAM Review
"... Abstract. Matrix factorization in numerical linear algebra (NLA) typically serves the purpose of restating some given problem in such a way that it can be solved more readily; for example, one major application is in the solution of a linear system of equations. In contrast, within applied statistic ..."
Abstract

Cited by 24 (1 self)
Abstract. Matrix factorization in numerical linear algebra (NLA) typically serves the purpose of restating some given problem in such a way that it can be solved more readily; for example, one major application is in the solution of a linear system of equations. In contrast, within applied statistics/psychometrics (AS/P), a much more common use for matrix factorization is in presenting, possibly spatially, the structure that may be inherent in a given data matrix obtained on a collection of objects observed over a set of variables. The actual components of the factorization are now of prime importance in their own right, not just a mechanism for solving another problem. We review some connections between NLA and AS/P and their respective concerns with matrix factorization and the subsequent rank reduction of a matrix. We note in particular that several results available for many decades in AS/P were more recently (re)discovered in the NLA literature. Two other distinctions between NLA and AS/P are also discussed briefly: how a generalized singular value decomposition might be defined, and the differing uses for the (newer) methods of optimization based on cyclic or iterative projections.
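The two purposes the abstract contrasts can be illustrated in a few lines of NumPy: a factorization consumed internally to solve a system (the NLA use), versus an SVD whose factors are themselves the object of interest (the AS/P use). This is a minimal sketch with an invented data matrix, not an example from the paper.

```python
import numpy as np

np.random.seed(0)

# Purpose 1 (NLA): factor to solve. np.linalg.solve performs an LU
# factorization internally; the factors are never inspected by the user.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)

# Purpose 2 (AS/P): factor to reveal structure. The singular values of a
# data matrix expose its numerical rank and dominant directions.
data = np.outer([1.0, 2.0, 3.0], [1.0, 1.0]) + 1e-10 * np.random.randn(3, 2)
U, s, Vt = np.linalg.svd(data)
# s[0] >> s[1]: the data matrix is essentially rank one.
```

Here the singular values themselves carry the finding (the data are nearly rank one), whereas the LU factors behind `solve` are a means to an end.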
On iterating linear transformations over recognizable sets of integers
 Theoretical Computer Science
"... It has been known for a long time that the sets of integer vectors that are recognizable by finitestate automata are those that can be defined in an extension of Presburger arithmetic. In this paper, we address the problem of deciding whether the closure of a linear transformation preserves the re ..."
Abstract

Cited by 20 (2 self)
It has been known for a long time that the sets of integer vectors that are recognizable by finite-state automata are those that can be defined in an extension of Presburger arithmetic. In this paper, we address the problem of deciding whether the closure of a linear transformation preserves the recognizable nature of sets of integer vectors. We solve this problem by introducing an original extension of the concept of recognizability to sets of vectors with complex components. This generalization allows us to obtain a simple necessary and sufficient condition over linear transformations, in terms of the eigenvalues of the transformation matrix. We then show that these eigenvalues do not need to be computed explicitly in order to evaluate the condition, and we give a full decision procedure based on simple integer arithmetic. The proof of this result is constructive, and can be turned into an algorithm for applying the closure of a linear transformation that satisfies the condition to a finite-state representation of a set. Finally, we show that the necessary and sufficient condition that we have obtained can straightforwardly be turned into a sufficient condition for linear transformations with linear guards. Key words: automata, iterations, Presburger arithmetic, recognizable sets of integers
The Moore–Penrose generalized inverse for sums of matrices
 SIAM J. Matrix Anal. Appl
, 1999
"... In this paper we exhibit, under suitable conditions, a neat relationship between the Moore–Penrose generalized inverse of a sum of two matrices and the Moore–Penrose generalized inverses of the individual terms. We include an application to the parallel sum of matrices. AMS 1991 subject classificati ..."
Abstract

Cited by 13 (0 self)
In this paper we exhibit, under suitable conditions, a neat relationship between the Moore–Penrose generalized inverse of a sum of two matrices and the Moore–Penrose generalized inverses of the individual terms. We include an application to the parallel sum of matrices. AMS 1991 subject classifications. Primary 15A09; secondary 15A18. Key words and phrases. Moore–Penrose generalized inverse, Sherman–Morrison– Woodbury formula, singular value decomposition, rank additivity, parallel sum.
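The flavor of such identities can be seen in a special case: when the column spaces and the row spaces of two matrices are mutually orthogonal, the Moore–Penrose inverse distributes over their sum. This toy example is our own illustration of "suitable conditions", not the paper's general result.

```python
import numpy as np

# A and B act on orthogonal coordinates, so their column spaces and
# row spaces are mutually orthogonal.
A = np.array([[2.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 5.0]])

# Under this orthogonality, pinv(A + B) equals pinv(A) + pinv(B).
lhs = np.linalg.pinv(A + B)
rhs = np.linalg.pinv(A) + np.linalg.pinv(B)
```

For generic A and B with overlapping ranges this identity fails, which is why conditions such as rank additivity (mentioned in the key words) are needed.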
Gianfranco Cimmino's Contributions to Numerical Mathematics
, 2004
"... Gianfranco Cimmino (19081989) authored several papers in the field of numerical analysis, and particularly in the area of matrix computations. His most important contribution in this field is the iterative method for solving linear algebraic systems that bears his name, published in 1938. This pape ..."
Abstract

Cited by 12 (3 self)
Gianfranco Cimmino (1908–1989) authored several papers in the field of numerical analysis, and particularly in the area of matrix computations. His most important contribution in this field is the iterative method for solving linear algebraic systems that bears his name, published in 1938. This paper reviews Cimmino's main contributions to numerical mathematics, together with subsequent developments inspired by his work. Some background information on Italian mathematics and on Mauro Picone's Istituto Nazionale per le Applicazioni del Calcolo, where Cimmino's early numerical work took place, is provided. The lasting importance of Cimmino's work in various application areas is demonstrated by an analysis of citation patterns in the broad technical and scientific literature.
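Cimmino's 1938 iteration admits a compact sketch: each step reflects the current iterate across every hyperplane of the system and averages the reflections. The example below uses equal weights (Cimmino's original formulation allowed arbitrary positive weights) and an invented 2x2 system, purely for illustration.

```python
import numpy as np

def cimmino(A, b, iters=200):
    """Cimmino-style iteration: reflect the iterate x across each
    hyperplane a_i . x = b_i, then average the m reflections."""
    m, n = A.shape
    x = np.zeros(n)
    norms2 = np.sum(A * A, axis=1)          # ||a_i||^2 for each row
    for _ in range(iters):
        residual = b - A @ x                # b_i - a_i . x for each i
        # reflection across hyperplane i is x + 2*(residual_i/||a_i||^2)*a_i;
        # averaging all m reflections gives the update below.
        x = x + (2.0 / m) * (A.T @ (residual / norms2))
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([5.0, 5.0])
x = cimmino(A, b)   # converges toward the exact solution [1, 2]
```

Because each step only needs row-wise inner products that can be computed independently, the method parallelizes naturally, one reason for its lasting importance in applications such as image reconstruction.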
Eigenvalue Computation in the 20th Century
 JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS
, 2000
"... This paper sketches the main research developments in the area of computational methods for eigenvalue problems during the 20th century. The earliest of such methods dates back to work of Jacobi in the middle of the nineteenth century. Since computing eigenvalues and vectors is essentially more c ..."
Abstract

Cited by 10 (0 self)
This paper sketches the main research developments in the area of computational methods for eigenvalue problems during the 20th century. The earliest of such methods dates back to work of Jacobi in the middle of the nineteenth century. Since computing eigenvalues and vectors is essentially more complicated than solving linear systems, it is not surprising that highly significant developments in this area started with the introduction of electronic computers around 1950. In the early decades of this century, however, important theoretical developments had been made from which computational techniques could grow. Research in this area of numerical linear algebra is very active, since there is a heavy demand for solving complicated problems associated with stability and perturbation analysis for practical applications.
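The earliest method the survey mentions, Jacobi's rotation scheme from the mid-nineteenth century, can be sketched in a few lines for a symmetric matrix. This is a textbook-style illustration of the idea, not any of the modern algorithms the survey covers.

```python
import numpy as np

def jacobi_eigenvalues(A, tol=1e-12):
    """Classical Jacobi method: repeatedly apply a plane rotation that
    annihilates the largest off-diagonal entry of a symmetric matrix."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for _ in range(100 * n * n):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        # rotation angle chosen so the (p, q) entry becomes zero:
        # tan(2*theta) = 2*A[p,q] / (A[q,q] - A[p,p])
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        A = J.T @ A @ J                      # similarity transform
    return np.sort(np.diag(A))

evals = jacobi_eigenvalues(np.array([[2.0, 1.0], [1.0, 2.0]]))
# approximately [1, 3]
```

Each rotation preserves the eigenvalues (it is a similarity transform) while monotonically shrinking the off-diagonal mass, so the diagonal converges to the spectrum.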
Maximum-Entropy Spatial Processing of Array Data
 Geophysics
, 1974
"... The procedure of maximumentropy spectral analysis (MESA), used in the processing of time series data, also applies to wavenumber (bearing) analysis of signals received from a spatially distributed linear array of sensors. The method is precisely the use of autoregressive spectral analysis in the ..."
Abstract

Cited by 7 (0 self)
The procedure of maximum-entropy spectral analysis (MESA), used in the processing of time series data, also applies to wavenumber (bearing) analysis of signals received from a spatially distributed linear array of sensors. The method is precisely the use of autoregressive spectral analysis in the space dimension rather than in time. There are also close links to the predictive deconvolution method used in geophysical work, and to the process of constructing noise-whitening filters in communication theory, as well as to least-squares model building. In this note, we review the maximum-entropy procedure, pointing out all these links. The specific algorithm appropriate to a uniformly spaced line array of sensors is given, as well as one possible algorithm for use in the case of nonuniform sensor spacing.
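The autoregressive view can be sketched as follows: fit an AR model to samples taken across a uniformly spaced line array and read the wavenumber off the peak of the AR spectrum. The least-squares fit below is a simple stand-in for the Burg recursion normally used in MESA, and the plane-wave data are invented for illustration.

```python
import numpy as np

def ar_spectrum(x, p, freqs):
    """Fit an AR(p) model by least squares and evaluate its power
    spectrum 1 / |1 - sum_k a_k e^{-i 2 pi f k}|^2 on a frequency grid."""
    # predict x[n] from the previous p samples
    X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    spec = []
    for f in freqs:
        denom = 1 - sum(a[k] * np.exp(-2j * np.pi * f * (k + 1))
                        for k in range(p))
        spec.append(1.0 / abs(denom) ** 2)
    return np.array(spec)

# "array data": one plane wave sampled at 64 uniformly spaced sensors,
# with a little noise; the spatial frequency plays the role of wavenumber.
n = np.arange(64)
x = np.cos(2 * np.pi * 0.2 * n) + 0.01 * np.random.default_rng(0).normal(size=64)
freqs = np.linspace(0.01, 0.49, 97)
spec = ar_spectrum(x, p=4, freqs=freqs)
peak = freqs[np.argmax(spec)]   # lands near the true wavenumber 0.2
```

The sharp AR peak at the true wavenumber, even from a short aperture, is exactly the resolution advantage that motivates MESA over conventional beamforming.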
A Fast + Practical + Deterministic Algorithm for Triangularizing Integer Matrices
, 1996
"... This paper presents a new algorithm for computing the row reduced echelon form triangularization H of an n \Theta m integer input matrix A. The cost of the algorithm is O(nmr 2 log 2 rjjAjj + r 4 log 3 rjjAjj) bit operations where r is the rank of A and jjAjj = max ij jA ij j. This complexi ..."
Abstract

Cited by 4 (2 self)
This paper presents a new algorithm for computing the row reduced echelon form triangularization H of an n × m integer input matrix A. The cost of the algorithm is O(nmr² log²(r‖A‖) + r⁴ log³(r‖A‖)) bit operations, where r is the rank of A and ‖A‖ = max_{i,j} |A_{ij}|. This complexity result assumes standard (quadratic) integer arithmetic but still matches, in the parameters n, m and r, the best bit complexity we can reasonably hope for under the assumption of standard matrix arithmetic. A unimodular transforming matrix U which satisfies UA = H is also computed within the same running time. As a direct application of our triangularization algorithm we give a fast algorithm for solving a system Ax = b of linear Diophantine equations. The algorithms presented here are both fast and practical. They are easily implemented, handle the case of input matrices having arbitrary shape and rank profile, and allow integer arithmetic to be performed in a residue number system. ...
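The basic ingredient of such triangularizations, unimodular row operations built from the extended Euclidean algorithm, can be sketched as follows. This textbook-style routine is far from the paper's asymptotically fast algorithm; it only illustrates how integer rows are combined without ever leaving the integers.

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - y * (a // b))

def triangularize(A):
    """Upper-triangularize an integer matrix with unimodular row
    operations (a naive Hermite-style elimination, for illustration)."""
    A = [row[:] for row in A]
    n, m = len(A), len(A[0])
    r = 0
    for c in range(m):
        for i in range(r + 1, n):
            if A[i][c] == 0:
                continue
            if A[r][c] == 0:
                A[r], A[i] = A[i], A[r]
                continue
            # the 2x2 transform [[s, t], [u, -v]] has determinant -1
            # (unimodular) and zeroes the (i, c) entry exactly.
            g, s, t = extended_gcd(A[r][c], A[i][c])
            u, v = A[i][c] // g, A[r][c] // g
            A[r], A[i] = (
                [s * a + t * b for a, b in zip(A[r], A[i])],
                [u * a - v * b for a, b in zip(A[r], A[i])],
            )
        if A[r][c] != 0:
            r += 1
            if r == n:
                break
    return A

T = triangularize([[4, 2], [6, 4]])   # entries below the diagonal are zeroed
```

Because every row operation is unimodular, the determinant is preserved up to sign, which is what makes such eliminations usable for solving linear Diophantine systems.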
On a Classical Method for Computing Eigenvectors
"... One of the oldest methods for computing an eigenvector of a matrix F is based on the solution of a set of homogeneous equations which can be traced back to the times of Cauchy (1829). The principal di culty of this approach was identi ed by Wilkinson (1958). We remove this obstacle and analyse the v ..."
Abstract

Cited by 3 (2 self)
One of the oldest methods for computing an eigenvector of a matrix F is based on the solution of a set of homogeneous equations which can be traced back to the times of Cauchy (1829). The principal difficulty of this approach was identified by Wilkinson (1958). We remove this obstacle and analyse the viability of this classical method. The key to the analysis is provided by the reciprocals of the diagonal elements of the inverse of the matrix F − σI, where σ is a shift approximating an eigenvalue. The final missing link is a perturbation result due to Sherman and Morrison, who proved that F − (1/(F⁻¹)_{ji}) e_i e_jᵀ is singular. We extend this result to the block case. Finally, we give a new impetus for Rayleigh quotient and Laguerre iterations.
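The classical construction can be sketched as follows: in the homogeneous system (F − σI)x = 0, fix one component of x to 1, discard one equation, and solve the remaining square system. This is a minimal illustration with an invented matrix; which equation to drop, Wilkinson's difficulty and the paper's actual subject, is simply taken as given here.

```python
import numpy as np

def classical_eigvec(F, sigma, j=0):
    """Cauchy-style eigenvector computation: in (F - sigma*I) x = 0,
    set x_j = 1, drop equation j, and solve for the other components."""
    n = F.shape[0]
    M = F - sigma * np.eye(n)
    keep = [k for k in range(n) if k != j]
    # move column j times x_j = 1 to the right-hand side
    sub = M[np.ix_(keep, keep)]
    rhs = -M[np.ix_(keep, [j])].ravel()
    x = np.empty(n)
    x[j] = 1.0
    x[keep] = np.linalg.solve(sub, rhs)
    return x / np.linalg.norm(x)

F = np.array([[2.0, 1.0], [1.0, 2.0]])
v = classical_eigvec(F, 3.0)    # eigenvalue 3, eigenvector along [1, 1]
```

Even when σ equals an eigenvalue exactly, so that F − σI is singular, the retained (n−1)×(n−1) subsystem is typically nonsingular; a poor choice of the fixed component j, however, can make it arbitrarily ill-conditioned.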