Results 1–10 of 5,292
Improved FOCUSS Method With Conjugate Gradient Iterations
"... Abstract—FOCal Underdetermined System Solver (FOCUSS) is a powerful tool for sparse representation and underdetermined inverse problems. In this correspondence, we strengthen the FOCUSS method with the following main contributions: 1) we give a more rigorous derivation of the FOCUSS for the sparsity parameter 0 < p ≤ 1 by a nonlinear transform and 2) we develop the CGFOCUSS by incorporating the conjugate gradient (CG) method into the FOCUSS, which significantly reduces the computational cost with respect to the standard FOCUSS and extends its availability for large-scale problems. We justify the CG ..."
Cited by 2 (1 self)
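The FOCUSS scheme this abstract builds on is, in its standard form, an iteratively reweighted least-squares iteration. The sketch below is a minimal plain-Python version of that baseline (without the CG acceleration the correspondence proposes), using the usual diagonal reweighting Π = diag(|x|^(2−p)) and a small ridge term `eps` for numerical safety; both are standard choices, not details taken from this paper.

```python
import numpy as np

def focuss(A, y, p=0.8, n_iter=30, eps=1e-10):
    """Baseline FOCUSS: iteratively reweighted least squares for sparse x with A x = y.

    p in (0, 1] is the sparsity parameter; smaller p promotes sparser solutions.
    """
    m, n = A.shape
    # Start from the minimum-norm solution of the underdetermined system.
    x = A.T @ np.linalg.solve(A @ A.T, y)
    for _ in range(n_iter):
        # Reweighting matrix Pi = diag(|x|^(2-p)); small entries shrink toward zero.
        Pi = np.abs(x) ** (2.0 - p)
        # Solve (A Pi A^T + eps I) z = y, then set x = Pi A^T z.
        APA = A @ (Pi[:, None] * A.T)
        z = np.linalg.solve(APA + eps * np.eye(m), y)
        x = Pi * (A.T @ z)
    return x
```

Each iterate (approximately) satisfies A x = y by construction, while the reweighting progressively concentrates the energy of x on a few coordinates.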
CGIHT: Conjugate Gradient Iterative Hard Thresholding for Compressed Sensing and Matrix Completion
, 2014
"... We introduce the Conjugate Gradient Iterative Hard Thresholding (CGIHT) family of algorithms for the efficient solution of constrained underdetermined linear systems of equations arising in compressed sensing, row-sparse approximation, and matrix completion. CGIHT is designed to balance the low per ..."
Cited by 5 (3 self)
DISCREPANCY PRINCIPLE FOR STATISTICAL INVERSE PROBLEMS WITH APPLICATION TO CONJUGATE GRADIENT ITERATION
"... The authors discuss the use of the discrepancy principle for statistical inverse problems, when the underlying operator is of trace class. Under this assumption the discrepancy principle is well-defined; however, a plain use of it may occasionally fail and it will yield suboptimal rates. Therefore, a modification of the discrepancy is introduced, which takes into account both of the above deficiencies. For a variety of linear regularization schemes as well as for conjugate gradient iteration it is shown to yield order-optimal a priori ..."
Cited by 2 (1 self)
The Role of the Inner Product in Stopping Criteria for Conjugate Gradient Iterations
, 1999
"... Two natural and efficient stopping criteria are derived for conjugate gradient (CG) methods, based on iteration parameters. The derivation makes use of the inner product matrix B defining the CG method. In particular, the relationship between the eigenvalues and B-norm of a matrix is investigated, and ..."
Cited by 24 (3 self)
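For reference, a standard CG loop with the common relative-residual stopping rule ||r_k|| ≤ tol · ||b||, which is simpler than the B-norm-based criteria derived in the paper, might look like this (a minimal sketch, not the paper's method):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A.

    Stops when the relative residual ||r|| / ||b|| drops below tol.
    """
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        if np.sqrt(rs) <= tol * b_norm:
            break          # relative-residual stopping criterion
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In exact arithmetic CG terminates in at most n steps; in floating point, the residual test above is what decides when the iteration is "done", which is exactly the question the paper's B-norm analysis refines.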
Convergence Improvement of the Conjugate Gradient Iterative Method for Finite Element Simulations
"... Abstract—The slow convergence of the Incomplete Cholesky preconditioned Conjugate Gradient (CG) method, applied to solve the system representing a magnetostatic finite element model, is caused by the presence of a few small eigenvalues in the spectrum of the system matrix. The corresponding eigenv ..."
Cited by 1 (0 self)
LMS-NEWTON ADAPTIVE FILTERING USING FFT-BASED CONJUGATE GRADIENT ITERATIONS
"... Abstract. In this paper, we propose a new fast Fourier transform (FFT) based LMS-Newton (LMSN) adaptive filter algorithm. At each adaptive time step t, the nth-order filter coefficients are updated by using the inverse of an n-by-n Hermitian, positive definite, Toeplitz operator T(t). By applying the cyclic displacement formula for the inverse of a Toeplitz operator, T(t)⁻¹ can be constructed using the solution vector of the Toeplitz system T(t)u(t) = e_n, where e_n is the last unit vector. We apply the FFT-based preconditioned conjugate gradient (PCG) method with the Toeplitz matrix T(t − 1 ..."
Cited by 1 (0 self)
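The FFT enters because a Toeplitz matrix-vector product, the workhorse inside any FFT-based PCG iteration for Toeplitz systems, can be computed in O(n log n) by embedding T in a circulant matrix of twice the size. A minimal sketch of that embedding for real data (the function name is illustrative, not from the paper):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply an n-by-n Toeplitz matrix T by x in O(n log n).

    c: first column of T, r: first row of T (with c[0] == r[0]).
    T is embedded in a 2n-by-2n circulant, which the FFT diagonalizes.
    """
    n = len(x)
    # First column of the circulant embedding: [c, 0, r[n-1], ..., r[1]].
    col = np.concatenate([c, [0.0], r[:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])  # zero-pad x to length 2n
    # Circulant multiply = circular convolution = pointwise product in Fourier space.
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
    return y[:n].real  # first n entries recover T @ x (real data assumed)
```

Each PCG iteration for T(t)u(t) = e_n then costs a handful of length-2n FFTs instead of an O(n²) dense product, which is where the speedup of the FFT-based LMSN algorithm comes from.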
Supplementary Material for Conjugate Gradient Iterative Hard Thresholding: Observed Noise Stability for Compressed Sensing,
"... This document contains a representation of the full data generated for [1]. In [1], plots were selected to emphasize the most crucial information contained in the data. For completeness, this document includes all omitted plots. Figs. 1–6 present the 50% recovery phase transition curves for the compressed sensing problem to show the smooth decrease in the recovery region for all algorithms. Figures 7–24, labeled Full data in the list of figures, provide all data for each problem class tested: the 50% recovery phase transition curves for all algorithms, an algorithm selection map identifying the algorithm with minimum average recovery time among all algorithms tested, the minimum average recovery time, and a ratio of the average recovery time for each algorithm compared to the minimum average recovery time among all algorithms tested. For a more detailed view of the recovery performance for all values of ρ in the phase transition region, the full data also contains semilog plots of the average computational times for successful recovery for the two values of δ which are closest to 0.1 and 0.3.

Consider y = Ax + e, where x ∈ R^n is k-sparse (i.e., the number of nonzero entries in x is at most k, denoted ‖x‖₀ ≤ k), A ∈ R^{m×n}, and e ∈ R^m represents model misfit between y and its representation by k columns of A and/or additive noise. The compressed sensing recovery question asks one to identify the minimizer

x̂ = arg min_{z ∈ R^n} ‖y − Az‖₂ subject to ‖z‖₀ ≤ k. (1)

The row-sparse approximation problem extends the compressed sensing problem to consider Y = AX + E, where X ∈ R^{n×r} ..."
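Problem (1) is the one iterative hard thresholding methods attack directly. As a point of reference, plain IHT, the baseline that CGIHT accelerates, can be sketched as below; the conservative step size 1/‖A‖₂² (a standard choice, not a detail from [1]) guarantees the residual never increases.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def iht(A, y, k, n_iter=300):
    """Iterative hard thresholding for (1): min ||y - Az||_2 s.t. ||z||_0 <= k."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # spectral-norm step: monotone descent
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on ||y - Ax||^2, then project onto the k-sparse set.
        x = hard_threshold(x + step * (A.T @ (y - A @ x)), k)
    return x
```

CGIHT replaces the fixed gradient step with conjugate gradient directions restricted to the current support, which is where its faster convergence comes from.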
Conjugate Gradient Iterative Hard Thresholding: Observed Noise Stability for Compressed Sensing (Blanchard, Tanner, and Wei)
"... (CGIHT) for compressed sensing combines the low per-iteration computational cost of simple line search iterative hard thresholding algorithms with the improved convergence rates of more sophisticated sparse approximation algorithms. This article shows that the average case performance of CGIHT is ro ..."
Large steps in cloth simulation
 SIGGRAPH 98 Conference Proceedings
, 1998
"... The bottleneck in most cloth simulation systems is that time steps must be small to avoid numerical instability. This paper describes a cloth simulation system that can stably take large time steps. The simulation system couples a new technique for enforcing constraints on individual cloth particle ..."
"... as well. The implicit integration method generates a large, unbanded sparse linear system at each time step which is solved using a modified conjugate gradient method that simultaneously enforces particles' constraints. The constraints are always maintained exactly, independent of the number of conjugate ..."
Cited by 576 (5 self)
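One common realization of such a constraint-enforcing CG (in the style of Baraff and Witkin's filtered PCG) passes the residual and search direction through a projection that zeroes constrained degrees of freedom, so those components of the solution never change. A minimal sketch, using a simple boolean mask as the filter; the paper's filter can also project velocities onto planes or lines per particle:

```python
import numpy as np

def modified_cg(A, b, filt, tol=1e-10, max_iter=500):
    """CG on A x = b where `filt` projects out constrained degrees of freedom.

    Because every residual and direction is filtered, constrained components
    of x are never updated: the constraints hold exactly at every iteration.
    """
    x = np.zeros_like(b)
    r = filt(b - A @ x)    # residual restricted to the free subspace
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) <= tol:
            break
        Ap = filt(A @ p)   # keep the Krylov space inside the free subspace
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

At convergence the equations are satisfied on the free degrees of freedom while the filtered components of x remain exactly at their constrained values, matching the abstract's claim that constraints are maintained exactly regardless of iteration count.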