Results 1–10 of 266
The NEURON Simulation Environment
, 1997
Abstract

Cited by 146 (9 self)
This article describes the concepts and strategies that have guided the design and implementation of this simulator, with emphasis on those features that are particularly relevant to its most efficient use.
An Updating Algorithm for Subspace Tracking
 IEEE Trans. Signal Processing
, 1992
Abstract

Cited by 99 (13 self)
In certain signal processing applications it is required to compute the null space of a matrix whose rows are samples of a signal with p components. The usual tool for doing this is the singular value decomposition. However, the singular value decomposition has the drawback that it requires O(p^3) operations to recompute when a new sample arrives. In this paper, we show that a different decomposition, called the URV decomposition, is equally effective in exhibiting the null space and can be updated in O(p^2) time. The updating technique can be run on a linear array of p processors in O(p) time. 1. Introduction Many problems in digital signal processing require the computation of an approximate null space of an n × p matrix A whose rows represent samples of a signal (see [9] for examples and references). Specifically, we must find an orthogonal matrix V = (V1 V2) such that (1) AV1 has no small singular values, and (2) AV2 is small. In this case we say that A has approximate ...
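For scale, the O(p^3) baseline the abstract contrasts against can be sketched in a few lines of NumPy (illustrative only, not the paper's URV updating scheme): a full SVD of the sample matrix, with the right singular vectors split at an assumed rank estimate r to expose the approximate null space A·V2 ≈ 0.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, r = 6, 40, 4                       # p signal components, n samples, assumed rank r
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, p))
A += 1e-8 * rng.standard_normal((n, p))  # small noise off the signal subspace

# A full SVD costs O(p^3) to redo whenever a new sample arrives -- the cost
# the URV decomposition's O(p^2) update avoids.
_, s, Vt = np.linalg.svd(A, full_matrices=True)
V1, V2 = Vt[:r].T, Vt[r:].T              # split V = (V1 V2) at the rank estimate

print(np.linalg.norm(A @ V2))            # small: V2 spans the approximate null space
```

NumPy returns Vᵀ with singular values in decreasing order, so the trailing rows of Vt correspond to the small singular values.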
Compiler Blockability of Numerical Algorithms
 IN PROCEEDINGS OF SUPERCOMPUTING '92
, 1992
Abstract

Cited by 96 (5 self)
Over the past decade, microprocessor design strategies have focused on increasing the computational power on a single chip. Unfortunately, memory speeds have not kept pace. The result is an imbalance between computation speed and memory speed. This imbalance is leading machine designers to use more complicated memory hierarchies. In turn, programmers are explicitly restructuring codes to perform well on particular memory systems, leading to machine-specific programs. This paper describes our investigation into compiler technology designed to obviate the need for machine-specific programming. Our results reveal that through the use of compiler optimizations many numerical algorithms can be expressed in a natural form while retaining good memory performance.
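The restructuring in question is typified by loop blocking (tiling). A minimal Python sketch of a blocked matrix multiply (hypothetical block size `bs`; NumPy used for the per-tile products) shows the transformation such a compiler would derive automatically:

```python
import numpy as np

def blocked_matmul(A, B, bs=32):
    """Multiply A @ B one bs-by-bs tile at a time, so each tile of A, B,
    and C stays cache-resident while it is reused."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(0, n, bs):            # tile the rows of C
        for j in range(0, m, bs):        # tile the columns of C
            for l in range(0, k, bs):    # accumulate over tiles of the inner dimension
                C[i:i+bs, j:j+bs] += A[i:i+bs, l:l+bs] @ B[l:l+bs, j:j+bs]
    return C
```

NumPy slicing clips tiles at the matrix edge, so the dimensions need not be multiples of `bs`.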
SDPA (Semidefinite Programming Algorithm) - User's Manual
, 1995
Abstract

Cited by 95 (28 self)
Abstract. The SDPA (SemiDefinite Programming Algorithm) [5] is a software package for solving semidefinite programs (SDPs). It is based on a Mehrotra-type predictor-corrector infeasible primal-dual interior-point method. The SDPA handles the standard form SDP and its dual. It is implemented in the C++ language, utilizing LAPACK [1] for matrix computations. The SDPA version 7.0.5 enjoys the following features:
• An efficient method for computing the search directions when the SDP to be solved is large-scale and sparse [4].
• Block-diagonal matrix structure and sparse matrix structure are supported for data matrices.
• Sparse or dense Cholesky factorization for the Schur matrix is automatically selected.
• An initial point can be specified.
• Some information on infeasibility of the SDP is provided.
This manual and the SDPA can be downloaded from the WWW site
Stochastic complementation, uncoupling Markov chains, and the theory of nearly reducible systems
 SIAM Rev
, 1989
Abstract

Cited by 75 (4 self)
Abstract. A concept called stochastic complementation is an idea which occurs naturally, although not always explicitly, in the theory and application of finite Markov chains. This paper brings this idea to the forefront with an explicit definition and a development of some of its properties. Applications of stochastic complementation are explored with respect to problems involving uncoupling procedures in the theory of Markov chains. Furthermore, the role of stochastic complementation in the development of the classical Simon–Ando theory of nearly reducible systems is presented. Key words. Markov chains, stationary distributions, stochastic matrix, stochastic complementation, nearly reducible systems, Simon–Ando theory
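For a concrete instance of the definition: with the transition matrix P partitioned as [[P11, P12], [P21, P22]], the stochastic complement of the first group of states is S = P11 + P12 (I − P22)^{-1} P21, itself a stochastic matrix. A small NumPy sketch on a toy 3-state chain (the example data is ours, not from the paper):

```python
import numpy as np

def stochastic_complement(P, idx):
    """Stochastic complement of the states in `idx`:
    S = P11 + P12 (I - P22)^{-1} P21."""
    idx = np.asarray(idx)
    rest = np.setdiff1d(np.arange(P.shape[0]), idx)
    P11 = P[np.ix_(idx, idx)]
    P12 = P[np.ix_(idx, rest)]
    P21 = P[np.ix_(rest, idx)]
    P22 = P[np.ix_(rest, rest)]
    return P11 + P12 @ np.linalg.solve(np.eye(len(rest)) - P22, P21)

# Toy 3-state chain; each row sums to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
S = stochastic_complement(P, [0, 1])   # 2x2, rows again sum to 1
```

The row sums of S being 1 is exactly the "stochastic" part of the theorem; it holds whenever I − P22 is nonsingular.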
Global Optimization of a Neural Network - Hidden Markov Model Hybrid
 IEEE Transactions on Neural Networks
, 1991
Abstract

Cited by 69 (16 self)
In this paper an original method for integrating Artificial Neural Networks (ANN) with Hidden Markov Models (HMM) is proposed. ANNs are suitable to perform phonetic classification, whereas HMMs have been proven successful at modeling the temporal structure of the speech signal. In the approach described here, the ANN outputs constitute the sequence of observation vectors for the HMM. An algorithm is proposed for global optimization of all the parameters. Results on speaker-independent recognition experiments using this integrated ANN-HMM system on the TIMIT continuous speech database are reported. 1 Introduction In spite of the fact that speech exhibits features that cannot be represented by a first-order Markov model, Hidden Markov Models (HMMs) of speech units (e.g., phonemes) have been used with a good degree of success in Automatic Speech Recognition (ASR) (Rabiner & Levinson 85; Lee & Hon 89). Artificial Neural Networks (ANNs) have proven to be useful for classifying speech prop...
A semidefinite framework for trust region subproblems with applications to large scale minimization
 Math. Programming
, 1997
Abstract

Cited by 59 (8 self)
This is an abbreviated revision of the University of Waterloo research report CORR 9432.
Dual-space linear discriminant analysis for face recognition
 Proc. IEEE Conf. Computer Vision and Pattern Recognition
, 2004
Abstract

Cited by 51 (14 self)
Linear Discriminant Analysis (LDA) is a popular feature extraction technique for face recognition. However, it often suffers from the small sample size problem when dealing with the high dimensional face data. Some approaches have been proposed to overcome this problem, but they are often unstable and have to discard some discriminative information. In this paper, a dual-space LDA approach for face recognition is proposed to take full advantage of the discriminative information in the face space. Based on a probabilistic visual model, the eigenvalue spectrum in the null space of the within-class scatter matrix is estimated, and discriminant analysis is simultaneously applied in the principal and null subspaces of the within-class scatter matrix. The two sets of discriminative features are then combined for recognition. It outperforms existing LDA approaches.
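The small sample size problem the abstract refers to is concrete: with n samples in c classes, the within-class scatter matrix has rank at most n − c, so it is singular whenever the feature dimension exceeds that. A minimal NumPy sketch (illustrative; not the paper's dual-space method):

```python
import numpy as np

def within_class_scatter(X, y):
    """S_w = sum over classes c of (x - mean_c)(x - mean_c)^T."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        D = X[y == c] - X[y == c].mean(axis=0)
        Sw += D.T @ D
    return Sw

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 20))   # 6 samples in 20 dimensions, few samples as with face data
y = np.array([0, 0, 0, 1, 1, 1])
Sw = within_class_scatter(X, y)
rank = np.linalg.matrix_rank(Sw)   # at most 6 - 2 = 4 < 20: Sw is singular
```

Because Sw is singular, the classical LDA criterion Sw^{-1} Sb cannot be formed directly; methods like the paper's must instead exploit the null space of Sw.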
Memory-Hierarchy Management
, 1994
Abstract

Cited by 50 (14 self)
The trend in high-performance microprocessor design is toward increasing computational power on the chip. Microprocessors can now process dramatically more data per machine cycle than previous models. Unfortunately, memory speeds have not kept pace. The result is an imbalance between computation speed and memory speed. This imbalance is leading machine designers to use more complicated memory hierarchies. In turn, programmers are explicitly restructuring codes to perform well on particular memory systems, leading to machine-specific programs. It is our belief that machine-specific programming is a step in the wrong direction. Compilers, not programmers, should handle machine-specific implementation details. To this end, this thesis develops and experiments with compiler algorithms that manage the memory hierarchy of a machine for floating-point intensive numerical codes. Specifically, we address the following issues: Scalar replacement. Lack of information concerning the flow of arra...
A Stable And Fast Algorithm For Updating The Singular Value Decomposition
, 1994
Abstract

Cited by 50 (2 self)
Let A ∈ R^(m×n) be a matrix with known singular values and singular vectors, and let A′ be the matrix obtained by appending a row to A. We present stable and fast algorithms for computing the singular values and the singular vectors of A′ in O((m + n) min(m, n) log₂² ε) floating point operations, where ε is the machine precision. Previous algorithms can be unstable and compute the singular values and the singular vectors of A′ in O((m + n) min²(m, n)) floating point operations. 1. Introduction. The singular value decomposition (SVD) of a matrix A ∈ R^(m×n) is A = U Ω Vᵀ, (1.1) where U ∈ R^(m×m) and V ∈ R^(n×n) are orthonormal, and Ω ∈ R^(m×n) is zero except on the main diagonal, which has nonnegative entries in decreasing order. The columns of U and V are the left singular vectors and the right singular vectors of A, respectively; the diagonal entries of Ω are the singular values of A. ...
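For contrast with the paper's fast update, here is a NumPy sketch of the baseline it improves on: append a row and recompute the dense SVD from scratch, then verify the factorization of Eq. (1.1). (This is the slow path; the paper's own algorithm is not reproduced here.)

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 8, 5
A = rng.standard_normal((m, n))
row = rng.standard_normal(n)

# Baseline: recompute the SVD of A' = [A; row] from scratch,
# O((m + n) min^2(m, n)) work for the dense SVD.
A1 = np.vstack([A, row])
U, s, Vt = np.linalg.svd(A1, full_matrices=True)

Omega = np.zeros_like(A1)          # (m+1) x n, zero off the main diagonal
np.fill_diagonal(Omega, s)         # singular values, nonnegative and decreasing
```

An updating algorithm instead reuses the known SVD of A, touching only the new row's contribution rather than refactoring the whole matrix.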