Results 1–10 of 389
GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems
 SIAM J. Sci. Stat. Comput.
, 1986
Cited by 1489 (40 self)
Abstract. We present an iterative method for solving linear systems, which has the property of minimizing at every step the norm of the residual vector over a Krylov subspace. The algorithm is derived from the Arnoldi process for constructing an ℓ2-orthogonal basis of Krylov subspaces. It can be considered as a generalization of Paige and Saunders' MINRES algorithm and is theoretically equivalent to the Generalized Conjugate Residual (GCR) method and to ORTHODIR. The new algorithm presents several advantages over GCR and ORTHODIR.
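The idea the abstract describes can be sketched in a few lines: build an Arnoldi basis of the Krylov subspace, then pick the coefficients that minimize the residual norm by least squares. This is an illustrative sketch, not the paper's implementation; the test matrix is made up.

```python
import numpy as np

def gmres_sketch(A, b, m=20):
    """Minimize ||b - Ax|| over the Krylov subspace span{b, Ab, A^2 b, ...}."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]                 # Arnoldi: expand the Krylov subspace
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ v       # orthogonalize against earlier basis vectors
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:         # "lucky breakdown": subspace is invariant
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    # choose y minimizing || beta*e1 - H y ||, i.e. the residual norm
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gmres_sketch(A, b)
print(np.linalg.norm(A @ x - b))   # residual is essentially zero after n steps
```

In exact arithmetic the residual is zero after at most n iterations; in practice the method is restarted or preconditioned, which is outside this sketch.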
Estimating three-dimensional motion parameters of a rigid planar patch, III: Finite point correspondences and three-view problem
 in Proc. IEEE Int. Conf. ASSP
Cited by 341 (2 self)
noise electron tubes, and superconductive parametric and storage devices. In 1965 he performed an experiment which first proved the existence of the Magnus force in superconductors and was a co-recipient of the RCA Research Award for the development of a superconductive parametric amplifier. In 1966 he became Director of Advanced
The NEURON Simulation Environment
, 1997
Cited by 173 (10 self)
This article describes the concepts and strategies that have guided the design and implementation of this simulator, with emphasis on those features that are particularly relevant to its most efficient use.
An Updating Algorithm for Subspace Tracking
 IEEE Trans. Signal Processing
, 1992
Cited by 100 (13 self)
In certain signal processing applications it is required to compute the null space of a matrix whose rows are samples of a signal with p components. The usual tool for doing this is the singular value decomposition. However, the singular value decomposition has the drawback that it requires O(p^3) operations to recompute when a new sample arrives. In this paper, we show that a different decomposition, called the URV decomposition, is equally effective in exhibiting the null space and can be updated in O(p^2) time. The updating technique can be run on a linear array of p processors in O(p) time.

1. Introduction

Many problems in digital signal processing require the computation of an approximate null space of an n × p matrix A whose rows represent samples of a signal (see [9] for examples and references). Specifically, we must find an orthogonal matrix V = (V1 V2) such that (1) AV1 has no small singular values, and (2) AV2 is small. In this case we say that A has approximate ...
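The splitting V = (V1 V2) that the abstract asks for can be illustrated with the SVD as the baseline tool; the paper's point is that the URV decomposition yields the same splitting but can be updated in O(p^2) per new sample. The data below are synthetic and the example is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, rank = 50, 6, 4
# samples that lie (up to tiny noise) in a rank-4 subspace of R^p
A = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, p))
A += 1e-8 * rng.standard_normal((n, p))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
V1, V2 = Vt[:rank].T, Vt[rank:].T    # V = (V1 V2)

print(s)                             # 4 large singular values, 2 tiny ones
print(np.linalg.norm(A @ V2))        # small: V2 spans the approximate null space
```

Recomputing this SVD from scratch for every arriving sample is the O(p^3) cost the paper's updatable URV decomposition avoids.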
SDPA (SemiDefinite Programming Algorithm) User's Manual, Version 7.0.5
, 2008
Cited by 100 (30 self)
The SDPA (SemiDefinite Programming Algorithm) [5] is a software package for solving semidefinite programs (SDPs). It is based on a Mehrotra-type predictor-corrector infeasible primal-dual interior-point method. The SDPA handles the standard form SDP and its dual. It is implemented in the C++ language, utilizing LAPACK [1] for matrix computations. The SDPA version 7.0.5 enjoys the following features:
• An efficient method for computing the search directions when the SDP to be solved is large-scale and sparse [4].
• Block-diagonal matrix structure and sparse matrix structure are supported for data matrices.
• Sparse or dense Cholesky factorization for the Schur matrix is automatically selected.
• An initial point can be specified.
• Some information on infeasibility of the SDP is provided.
This manual and the SDPA can be downloaded from the WWW site
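SDPA reads problems from a sparse input file (.dat-s). The fragment below is a hedged sketch of that format with made-up data: entry lines are "matno blockno i j value", upper triangle only, with matrix 0 denoting the constant matrix F0. Consult the manual for the exact grammar.

```
* Illustrative SDPA sparse input (.dat-s); all values are made up.
* min c'x  s.t.  x1 F1 + x2 F2 - F0 is positive semidefinite
2 = mDIM
1 = nBLOCK
2 = blockStruct
{10.0, 20.0}
0 1 1 1 1.0
0 1 2 2 1.0
1 1 1 1 1.0
1 1 1 2 0.5
2 1 2 2 1.0
```

A negative entry in blockStruct would declare a diagonal block, which is how the format exploits the block-diagonal structure mentioned above.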
Compiler Blockability of Numerical Algorithms
 in Proceedings of Supercomputing '92
, 1992
Cited by 97 (5 self)
Over the past decade, microprocessor design strategies have focused on increasing the computational power on a single chip. Unfortunately, memory speeds have not kept pace. The result is an imbalance between computation speed and memory speed. This imbalance is leading machine designers to use more complicated memory hierarchies. In turn, programmers are explicitly restructuring codes to perform well on particular memory systems, leading to machine-specific programs. This paper describes our investigation into compiler technology designed to obviate the need for machine-specific programming. Our results reveal that through the use of compiler optimizations many numerical algorithms can be expressed in a natural form while retaining good memory performance.
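The loop restructuring such compilers apply is blocking (tiling): iterating over cache-sized tiles of the arrays instead of streaming whole rows or columns. The sketch below shows only the shape of the transformation on matrix multiply; the payoff appears in compiled code with real caches, not in an interpreter, and the example data are made up.

```python
def matmul_blocked(A, B, bs=2):
    """Blocked n x n matrix multiply: the kk/jj loops walk B in bs x bs tiles."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for kk in range(0, n, bs):            # tile the k loop
        for jj in range(0, n, bs):        # tile the j loop
            for i in range(n):
                for k in range(kk, min(kk + bs, n)):
                    a = A[i][k]           # reused across the inner j tile
                    for j in range(jj, min(jj + bs, n)):
                        C[i][j] += a * B[k][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(matmul_blocked(A, B))   # [[19.0, 22.0], [43.0, 50.0]]
```

The result is identical to the untiled triple loop; only the order in which memory is touched changes, which is exactly what makes the transformation safe for a compiler to apply automatically.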
A Test for the Number of Factors in an Approximate Factor Model
 Journal of Finance
, 1993
Cited by 94 (8 self)
An important issue in applications of multifactor models of asset returns is the appropriate number of factors. Most extant tests for the number of factors are valid only for strict factor models, in which diversifiable returns are uncorrelated across assets. In this paper we develop a test statistic to determine the number of factors in an approximate factor model of asset returns, which does not require that diversifiable components of returns be uncorrelated across assets. We find evidence for one to six pervasive factors in the cross-section of New York Stock Exchange and American Stock Exchange stock returns.

The Arbitrage Pricing Theory (APT) of Ross (1976) has generated an increased interest in the application of linear factor models in the study of capital asset pricing. The APT has the attractive feature that it makes a minimal number of assumptions about the nature of the economy (a factor structure for the returns-generating process, a large number of assets, and frictionless trading). The costs of these minimalist assumptions include certain ambiguities, such as an approximate pricing relation and an unknown number of pervasive factors. In order to estimate and test the APT, one must specify the number of pervasive factors in asset returns. The issue of the appropriate number of factors has been the subject of some controversy (see, for example, Roll and
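This is not the paper's test statistic, just a hedged illustration of the underlying phenomenon: pervasive factors show up as dominant eigenvalues of the return covariance matrix. The data are simulated with three true factors.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, k = 2000, 30, 3                     # periods, assets, true factors
loadings = rng.standard_normal((N, k))
factors = rng.standard_normal((T, k))
# returns = common factor component + idiosyncratic noise
returns = factors @ loadings.T + 0.1 * rng.standard_normal((T, N))

# eigenvalues of the N x N sample covariance, largest first
eigvals = np.linalg.eigvalsh(np.cov(returns.T))[::-1]
print(eigvals[:5])   # the first 3 dominate; the rest sit near the noise floor
```

The paper's contribution is precisely that a gap like this can be tested formally even in an approximate factor model, where the idiosyncratic components may be correlated across assets.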
Stochastic complementation, uncoupling Markov chains, and the theory of nearly reducible systems
 SIAM Rev
, 1989
Cited by 80 (4 self)
Abstract. A concept called stochastic complementation is an idea which occurs naturally, although not always explicitly, in the theory and application of finite Markov chains. This paper brings this idea to the forefront with an explicit definition and a development of some of its properties. Applications of stochastic complementation are explored with respect to problems involving uncoupling procedures in the theory of Markov chains. Furthermore, the role of stochastic complementation in the development of the classical Simon–Ando theory of nearly reducible systems is presented.

Key words. Markov chains, stationary distributions, stochastic matrix, stochastic complementation, nearly reducible systems, Simon–Ando theory
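The explicit definition can be demonstrated directly: for a row-stochastic matrix partitioned as P = [[P11, P12], [P21, P22]], the stochastic complement of P11 is S11 = P11 + P12 (I - P22)^{-1} P21, which is itself a stochastic matrix on the first group of states. The transition probabilities below are made up for illustration.

```python
import numpy as np

# a 4-state row-stochastic matrix, partitioned into 2 + 2 states
P = np.array([[0.5, 0.2, 0.3, 0.0],
              [0.1, 0.6, 0.1, 0.2],
              [0.2, 0.1, 0.4, 0.3],
              [0.0, 0.3, 0.2, 0.5]])
P11, P12 = P[:2, :2], P[:2, 2:]
P21, P22 = P[2:, :2], P[2:, 2:]

# stochastic complement of the first block
S11 = P11 + P12 @ np.linalg.inv(np.eye(2) - P22) @ P21
print(S11.sum(axis=1))   # each row sums to 1: the complement is stochastic
```

The term P12 (I - P22)^{-1} P21 accounts for excursions through the censored states, which is why the complement describes the chain watched only on the first group of states.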
Global Optimization of a Neural Network-Hidden Markov Model Hybrid
 IEEE Transactions on Neural Networks
, 1991
Cited by 70 (16 self)
In this paper an original method for integrating Artificial Neural Networks (ANNs) with Hidden Markov Models (HMMs) is proposed. ANNs are suitable for performing phonetic classification, whereas HMMs have been proven successful at modeling the temporal structure of the speech signal. In the approach described here, the ANN outputs constitute the sequence of observation vectors for the HMM. An algorithm is proposed for global optimization of all the parameters. Results on speaker-independent recognition experiments using this integrated ANN-HMM system on the TIMIT continuous speech database are reported.

1 Introduction

In spite of the fact that speech exhibits features that cannot be represented by a first-order Markov model, Hidden Markov Models (HMMs) of speech units (e.g., phonemes) have been used with a good degree of success in Automatic Speech Recognition (ASR) (Rabiner & Levinson 85; Lee & Hon 89). Artificial Neural Networks (ANNs) have proven to be useful for classifying speech prop...
A semidefinite framework for trust region subproblems with applications to large scale minimization
 Math. Programming
, 1997
Cited by 63 (9 self)
This is an abbreviated revision of the University of Waterloo research report CORR 94-32.