Results 1–10 of 114
The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Anal. Appl., 1998.
Cited by 640 (1 self)
In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structure computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework gives penetrating new insights, allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms that gives a top-level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper.
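As a toy illustration of optimization under orthogonality constraints, the sketch below takes projected-gradient steps on the Stiefel manifold with a QR retraction. It is a minimal first-order method, not the Newton or conjugate gradient algorithms of the paper, and the block Rayleigh-quotient objective, sizes, and seed are illustrative only.

```python
import numpy as np

def stiefel_gradient_step(X, egrad, step):
    """One projected-gradient step on the Stiefel manifold {X : X^T X = I}.
    Projects the Euclidean gradient onto the tangent space at X, takes a
    step, and retracts back to the manifold with a thin QR factorization."""
    sym = 0.5 * (X.T @ egrad + egrad.T @ X)
    tangent = egrad - X @ sym          # tangent-space projection of the gradient
    Q, R = np.linalg.qr(X - step * tangent)
    return Q * np.sign(np.diag(R))     # sign fix keeps the retraction continuous

# Example objective: minimize trace(X^T A X) over orthonormal 5x2 frames
# (a block Rayleigh quotient); its Euclidean gradient is 2 A X.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A + A.T
X = np.linalg.qr(rng.standard_normal((5, 2)))[0]
f0 = np.trace(X.T @ A @ X)
for _ in range(500):
    X = stiefel_gradient_step(X, 2 * A @ X, 0.05)
```

Every iterate stays exactly orthonormal, which is the point of working on the manifold rather than penalizing the constraint.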
An Analytical Constant Modulus Algorithm, 1996.
Cited by 166 (35 self)
Iterative constant modulus algorithms such as Godard and CMA have been used to blindly separate a superposition of co-channel constant modulus (CM) signals impinging on an antenna array. These algorithms have certain deficiencies in the context of convergence to local minima and the retrieval of all individual CM signals that are present in the channel. In this paper, we show that the underlying constant modulus factorization problem is, in fact, a generalized eigenvalue problem, and may be solved via a simultaneous diagonalization of a set of matrices. With this new, analytical approach, it is possible to detect the number of CM signals present in the channel and to retrieve all of them exactly, rejecting other, non-CM signals. Only a modest number of samples is required. The algorithm is robust in the presence of noise and is tested on measured data collected from an experimental setup.
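The reduction to a generalized eigenvalue problem can be seen on a toy pencil: two matrices sharing an unknown diagonalizer are jointly diagonalized by the generalized eigenvectors of the pencil. The construction below is illustrative (sizes and seed are arbitrary) and is not the paper's ACMA.

```python
import numpy as np
from scipy.linalg import eig

# Two matrices that share an unknown diagonalizer V, a stand-in for the
# set of matrices a joint-diagonalization method works on.
rng = np.random.default_rng(1)
V = rng.standard_normal((4, 4))
d1, d2 = rng.uniform(1.0, 2.0, 4), rng.uniform(1.0, 2.0, 4)
Vinv = np.linalg.inv(V)
M1 = V @ np.diag(d1) @ Vinv
M2 = V @ np.diag(d2) @ Vinv

# Generalized eigenvalues of the pencil (M1, M2) are the ratios d1/d2, and
# the eigenvectors recover the columns of V up to scale and permutation.
lam, W = eig(M1, M2)
```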
Blind equalization and multiuser detection in dispersive CDMA channels. IEEE Trans. Commun., 1998.
Blind Adaptive Interference Suppression for Direct-Sequence CDMA. IEEE Trans. Commun., 1994.
Cited by 82 (7 self)
Direct-Sequence (DS) Code Division Multiple Access (CDMA) is a promising technology for wireless environments with multiple simultaneous transmissions because of several features: asynchronous multiple access, robustness to frequency-selective fading, and multipath combining. The capacity ...
A Numerically Stable, Structure-Preserving Method for Computing the Eigenvalues of Real Hamiltonian or Symplectic Pencils. Numer. Math., 1996.
Cited by 75 (33 self)
A new method is presented for the numerical computation of the generalized eigenvalues of real Hamiltonian or symplectic pencils and matrices. The method is strongly backward stable, i.e., it is numerically backward stable and preserves the structure (i.e., Hamiltonian or symplectic). In the case of a Hamiltonian matrix the method is closely related to the square-reduced method of Van Loan, but in contrast to that method, which may suffer from a loss of accuracy of order √ε, where ε is the machine precision, the new method computes the eigenvalues to full possible accuracy.
Keywords: eigenvalue problem, Hamiltonian pencil (matrix), symplectic pencil (matrix), skew-Hamiltonian matrix. AMS subject classification: 65F15.
1 Introduction. The eigenproblem for Hamiltonian and symplectic matrices has received a lot of attention in the last 25 years, since the landmark papers of Laub [13] and Paige/Van Loan [20]. The reason for this is the importance of this problem in many applications in c...
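The structure at stake can be checked numerically: a real Hamiltonian matrix has eigenvalues in ±λ pairs, a symmetry a structure-preserving method keeps exactly, while the unstructured solver below keeps it only to rounding error. Sizes and seed are illustrative.

```python
import numpy as np

# A real Hamiltonian matrix has the block form H = [[A, G], [Q, -A^T]]
# with G and Q symmetric; its eigenvalues occur in {lambda, -lambda} pairs.
rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n)); G = G + G.T   # symmetric block
Q = rng.standard_normal((n, n)); Q = Q + Q.T   # symmetric block
H = np.block([[A, G], [Q, -A.T]])
lam = np.linalg.eigvals(H)                     # pairing holds to rounding error
```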
Low-Rank Orthogonal Decompositions for Information Retrieval Applications. Numerical Linear Algebra with Applications, 1996.
Cited by 59 (5 self)
This paper is organized as follows. Section 2 is a review of basic concepts needed to understand LSI. Section 3 is a discussion of the low-rank ULV algorithm, with particular focus on computational complexity and the ability to produce good approximations to the singular subspaces of sparse rectangular matrices. Section 4 uses a constructive example to illustrate how LSI can use the ULV decomposition to represent terms and documents in the same semantic space, how a query is represented, how additional documents are added (or folded in), and how ULV-updating represents additional documents. Section 5 is a discussion of ULV-updating, a procedure based on the low-rank ULV algorithm (which has not been previously considered in the literature). In particular, we give an algorithm for ULV-updating along with a comparison to the folding-in process with regard to robustness of query matching and computational complexity. ULV-updating is then illustrated using a small example. Section 6 is a brief summary and considerations for future work.
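The rank-k semantic space and the fold-in step can be sketched with any rank-k orthogonal decomposition; the snippet below substitutes the SVD for the paper's ULV factorization (it plays the same role: orthogonal factors plus rank-k truncation). The term-document matrix is random and purely illustrative.

```python
import numpy as np

# 6 terms x 4 documents, rank-2 semantic space.
rng = np.random.default_rng(3)
A = rng.random((6, 4))
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k].T
docs = Vk * sk                     # documents represented in the k-dim space

# Folding in a new document: project its term vector onto the rank-k basis.
new_doc = rng.random(6)
folded = (new_doc @ Uk) / sk
```

Fold-in is cheap but, as the paper discusses for ULV-updating versus folding-in, it does not update the decomposition itself.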
Updating a Rank-Revealing ULV Decomposition, 1991.
Cited by 56 (4 self)
A ULV decomposition of a matrix A of order n is a decomposition of the form A = ULV^H, where U and V are orthogonal matrices and L is a lower triangular matrix. When A is approximately of rank k, the decomposition is rank-revealing if the last n − k rows of L are small. This paper presents algorithms for updating a rank-revealing ULV decomposition. The algorithms run in O(n²) time and can be implemented on a linear array of processors to run in O(n) time.
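The form of the factorization (though not the updating algorithms, and with no rank-revealing guarantee) can be reproduced from two plain QR factorizations:

```python
import numpy as np

def ulv_via_qr(A):
    """Return U, L, V with A = U @ L @ V.T, U and V orthogonal, L lower
    triangular. This reproduces only the *shape* of a ULV decomposition;
    it is not rank-revealing and is not the paper's updating algorithm."""
    U, R = np.linalg.qr(A)
    V, R2 = np.linalg.qr(R.T)   # R.T = V @ R2  =>  R = R2.T @ V.T
    return U, R2.T, V

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
U, L, V = ulv_via_qr(A)
```

A rank-revealing variant would additionally deflate so that, for numerical rank k, the last n − k rows of L are small; maintaining that property cheaply under updates is what the paper's algorithms provide.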
Dimension reduction in text classification with support vector machines. Journal of Machine Learning Research, 2005.
Cited by 42 (3 self)
Support vector machines (SVMs) have been recognized as one of the most successful classification methods for many applications, including text classification. Even though the learning ability and computational complexity of training in support vector machines may be independent of the dimension of the feature space, reducing computational complexity is an essential issue for efficiently handling a large number of terms in practical applications of text classification. In this paper, we adopt novel dimension reduction methods to reduce the dimension of the document vectors dramatically. We also introduce decision functions for the centroid-based classification algorithm and support vector classifiers to handle the classification problem where a document may belong to multiple classes. Our substantial experimental results show that, with several dimension reduction methods designed particularly for clustered data, higher efficiency for both training and testing can be achieved without sacrificing prediction accuracy of text classification, even when the dimension of the input space is significantly reduced.
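One dimension reduction idea for clustered data can be sketched as projection onto an orthonormal basis of the span of the class centroids (k dimensions for k classes), followed here by nearest-centroid classification in the reduced space. The data are synthetic and this is not the paper's exact method or experiment.

```python
import numpy as np

# Synthetic clustered "documents": 3 classes, 50-dim features, 40 per class.
rng = np.random.default_rng(5)
k, d, n_per = 3, 50, 40
centers = 3.0 * rng.standard_normal((k, d))
X = np.vstack([c + rng.standard_normal((n_per, d)) for c in centers])
y = np.repeat(np.arange(k), n_per)

centroids = np.stack([X[y == c].mean(axis=0) for c in range(k)])
B, _ = np.linalg.qr(centroids.T)       # d x k orthonormal basis of centroid span
Z = X @ B                              # documents reduced from d to k dims
Zc = centroids @ B                     # centroids in the reduced space
pred = np.argmin(((Z[:, None, :] - Zc[None, :, :]) ** 2).sum(-1), axis=1)
```

Because the centroids lie inside the projection subspace, between-class separation survives the reduction while most of the noise dimensions are discarded.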
Computing Rank-Revealing QR Factorizations of Dense Matrices. Argonne Preprint ANL-MCS-P559-0196, Argonne National Laboratory, 1996.
Cited by 39 (2 self)
... this paper, and we give only a brief synopsis here. For details, the reader is referred to the code. Test matrices 1 through 5 were designed to exercise column pivoting. Matrix 6 was designed to test the behavior of the condition estimation in the presence of clusters for the smallest singular value. For the other cases, we employed the LAPACK matrix generator xLATMS, which generates random symmetric matrices by multiplying a diagonal matrix with prescribed singular values by random orthogonal matrices from the left and right. For the break-1 distribution, all singular values are 1.0 except for one. In the arithmetic and geometric distributions, they decay from 1.0 to a specified smallest singular value in an arithmetic and geometric fashion, respectively. In the "reversed" distributions, the order of the diagonal entries was reversed. For test cases 7 through 12, we used xLATMS to generate a matrix of order ...
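A test matrix in the spirit of the xLATMS recipe (prescribed singular values multiplied by random orthogonal factors; the sizes and values below are illustrative, not the paper's test set) shows the rank-revealing behavior these tests exercise:

```python
import numpy as np
from scipy.linalg import qr

# Numerical rank 3 with a sharp gap to the remaining singular values.
rng = np.random.default_rng(6)
n, k = 8, 3
sv = np.array([1.0, 0.5, 0.25] + [1e-10] * (n - k))
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(sv) @ V.T

# Column-pivoted QR: A[:, piv] = Q @ R. For a well-separated gap, the
# diagonal of R drops to the level of the small singular values after
# the first k columns.
Q, R, piv = qr(A, pivoting=True)
```

Column pivoting reveals the gap reliably in practice, though (unlike the SVD) its worst-case guarantees are weaker, which is part of what such test suites probe.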