Results 1 – 10 of 673
Stable signal recovery from incomplete and inaccurate measurements
Comm. Pure Appl. Math., 2006
Cited by 1397 (38 self)
Abstract: Suppose we wish to recover a vector x0 ∈ R^m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n × m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x0 accurately based on the data y? ...
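The recovery problem in this abstract is typically attacked by ℓ1-regularized least squares. As a rough, self-contained sketch (not this paper's own constrained formulation), an iterative soft-thresholding (ISTA) solver for min ½‖Ax − y‖² + λ‖x‖₁; the dimensions, sparsity pattern, and λ below are illustrative choices, not values from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, iters=500):
    # Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m = 40, 100                             # far fewer rows than columns (n << m)
A = rng.standard_normal((n, m)) / np.sqrt(n)
x0 = np.zeros(m)
x0[[3, 17, 60]] = [2.0, -1.5, 1.0]         # sparse ground truth
y = A @ x0 + 0.01 * rng.standard_normal(n) # incomplete, contaminated observations
x_hat = ista(A, y, lam=0.05)
```

With a well-conditioned random A and only a few nonzeros, the support of x0 is typically recovered even though the system is heavily underdetermined.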
The Dantzig selector: statistical estimation when p is much larger than n
2005
Cited by 879 (14 self)
Abstract: In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ R^p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ...
Faster Compressed Sparse Row (CSR)-based Sparse Matrix-Vector Multiplication using CUDA
"... performance evaluation ..."
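For reference, the CSR sparse matrix-vector product that this entry accelerates on GPUs can be written in its scalar form (one row per iteration; on a GPU this loop is what gets assigned one thread or warp per row). A minimal pure-Python sketch with an illustrative 3 × 3 matrix:

```python
def csr_spmv(values, col_idx, row_ptr, x):
    # y = A @ x for A stored in CSR form:
    #   values  - nonzero entries, row by row
    #   col_idx - column index of each nonzero
    #   row_ptr - row i's nonzeros occupy values[row_ptr[i]:row_ptr[i+1]]
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[4, 0, 1],
#      [0, 2, 0],
#      [3, 0, 5]]
values = [4.0, 1.0, 2.0, 3.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
y = csr_spmv(values, col_idx, row_ptr, [1.0, 2.0, 3.0])
```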
A column approximate minimum degree ordering algorithm
2000
Cited by 318 (52 self)
Abstract: Sparse Gaussian elimination with partial pivoting computes the factorization PAQ = LU of a sparse matrix A, where the row ordering P is selected during factorization using standard partial pivoting with row interchanges. The goal is to select a column preordering, Q, based solely on the nonzero pattern ...
Solving unsymmetric sparse systems of linear equations with PARDISO
Journal of Future Generation Computer Systems, 2004
Cited by 198 (12 self)
Abstract: Supernode partitioning for unsymmetric matrices together with complete block diagonal supernode pivoting and asynchronous computation can achieve high gigaflop rates for parallel sparse LU factorization on shared-memory parallel computers. The progress in weighted graph matching algorithms helps to ...
Dealing with Dense Rows in the Solution of Sparse Linear Least Squares Problems
1995
Cited by 5 (0 self)
Abstract: Sparse linear least squares problems containing a few relatively dense rows occur frequently in practice. Straightforward solution of these problems could cause catastrophic fill and deliver extremely poor performance. This paper studies a scheme for solving such problems efficiently by handling dense ...
AN EFFICIENT STORAGE FORMAT FOR LARGE SPARSE MATRICES
Abstract: In this paper we consider the linear system Ax = b, where A is a large sparse matrix. A new efficient, simple, and inexpensive method for storage of the coefficient matrix A is presented. The purpose of this method is to reduce the storage volume of large nonsymmetric sparse matrices. The results show that the proposed method is very inexpensive in comparison with current methods such as the Coordinate format, Compressed Sparse Row (CSR) format, and Modified Sparse Row (MSR) format.
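The storage baselines this entry compares against differ in a simple way: Coordinate (COO) format keeps three numbers per nonzero (row, column, value), while CSR keeps two per nonzero plus n + 1 row pointers, so CSR wins whenever nnz > n + 1. A minimal sketch of both conversions (the example matrix is illustrative, not from the paper):

```python
def to_coo(dense):
    # COO: three parallel lists (row, col, value), one triple per nonzero
    rows, cols, vals = [], [], []
    for i, row in enumerate(dense):
        for j, v in enumerate(row):
            if v != 0:
                rows.append(i); cols.append(j); vals.append(v)
    return rows, cols, vals

def to_csr(dense):
    # CSR: values and column indices per nonzero, plus n + 1 row pointers
    vals, cols, ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                vals.append(v); cols.append(j)
        ptr.append(len(vals))
    return vals, cols, ptr

dense = [[4, 0, 0, 1],
         [0, 2, 0, 0],
         [0, 0, 3, 0],
         [5, 0, 0, 6]]            # 6 nonzeros, n = 4
coo_count = sum(len(part) for part in to_coo(dense))   # 3 * nnz = 18
csr_count = sum(len(part) for part in to_csr(dense))   # 2 * nnz + (n + 1) = 17
```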
Computing A Sparse Jacobian Matrix By Rows And Columns
1995
Cited by 25 (4 self)
Abstract: In this paper we show that it is possible to exploit sparsity both in columns and rows by employing the forward and the reverse mode of automatic differentiation. A graph-theoretic characterization of the problem is given. KEY WORDS: AD, Forward and Reverse mode, Nonlinear Optimization, Numerical ...
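The forward mode of automatic differentiation that this abstract mentions can be sketched with dual numbers: seeding one input direction e_j and running the function once yields one column of the Jacobian exactly, which is why column sparsity lets several columns share a pass. A minimal illustration (the function f below is an arbitrary example, not from the paper):

```python
class Dual:
    # Dual number a + b*eps with eps^2 = 0: carries a value and a derivative
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def jacobian_column(f, x, j):
    # One forward pass seeded with e_j yields column j of the Jacobian
    duals = [Dual(v, 1.0 if k == j else 0.0) for k, v in enumerate(x)]
    return [out.dot for out in f(duals)]

def f(v):
    x0, x1 = v
    return [x0 * x1, x0 + 3 * x1]   # Jacobian at (2, 5): [[5, 2], [1, 3]]

col0 = jacobian_column(f, [2.0, 5.0], 0)
col1 = jacobian_column(f, [2.0, 5.0], 1)
```

The reverse mode, by contrast, propagates adjoints backwards and yields one Jacobian *row* per pass; the paper's point is that combining both exploits sparsity in rows and columns simultaneously.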
Row modifications of a sparse Cholesky factorization
SIAM J. Matrix Anal. Appl., 2005
Cited by 21 (4 self)
Abstract: Given a sparse, symmetric positive definite matrix C and an associated sparse Cholesky factorization LDL^T, we develop sparse techniques for updating the factorization after a symmetric modification of a row and column of C. We show how the modification in the Cholesky factorization associated ...
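The paper treats full row-and-column modifications of a sparse LDL^T factorization; a simpler related primitive, shown here only for orientation, is the classical dense rank-1 Cholesky update, which recomputes the factor of L L^T + w w^T in O(n^2) instead of refactorizing from scratch:

```python
import numpy as np

def chol_update(L, w):
    # Classical rank-1 update: returns the lower Cholesky factor of
    # L @ L.T + w @ w.T, using one Givens-style rotation per column.
    L, w = L.copy(), w.copy()
    n = len(w)
    for k in range(n):
        r = np.hypot(L[k, k], w[k])
        c, s = r / L[k, k], w[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * w[k+1:]) / c
            w[k+1:] = c * w[k+1:] - s * L[k+1:, k]
    return L

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = np.linalg.cholesky(A)
w = np.array([1.0, 1.0])
L_new = chol_update(L, w)   # factor of A + w @ w.T
```

The sparse setting of the paper is harder precisely because such an update must also track how the nonzero pattern of L changes.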
Rank-sparsity incoherence for matrix decomposition
2010
Cited by 230 (21 self)
Abstract: Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification, and is intractable ... principle between the sparsity pattern of a matrix and its row and column spaces, and use it to characterize both fundamental identifiability as well as (deterministic) sufficient conditions for exact recovery. Our analysis is geometric in nature, with the tangent spaces to the algebraic varieties of sparse ...