Results 1 – 10 of 155,472
A Different Approach To Bounding The Minimal Residual Norm In Krylov Methods
, 1998
"... In the context of Krylov methods for solving systems of linear equations, expressions and bounds are derived for the norm of the minimal residual, like the one produced by GMRES or MINRES. It is shown that the minimal residual norm is large as long as the Krylov basis is well-conditioned. In the con ..."
Cited by 4 (2 self)
GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems
 SIAM J. Sci. Stat. Comput.
, 1986
"... We present an iterative method for solving linear systems, which has the property of minimizing at every step the norm of the residual vector over a Krylov subspace. The algorithm is derived from the Arnoldi process for constructing an l2-orthogonal basis of Krylov subspaces. It can be considered a ..."
Cited by 2046 (40 self)
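The abstract above describes the core of GMRES: an Arnoldi process builds an l2-orthogonal basis of the Krylov subspace, and the residual norm is minimized through a small least-squares problem with the Hessenberg matrix. A minimal NumPy sketch of that idea (zero initial guess, no restarting, illustrative breakdown tolerance):

```python
import numpy as np

def gmres(A, b, m=20):
    """Minimal GMRES sketch: Arnoldi builds an l2-orthogonal Krylov basis Q;
    the residual norm ||b - A x|| is then minimized via a small least-squares
    problem with the (k+1) x k Hessenberg matrix H. Zero initial guess."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    steps = m
    for k in range(m):
        v = A @ Q[:, k]                 # expand the Krylov subspace
        for j in range(k + 1):          # modified Gram-Schmidt orthogonalization
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] < 1e-14:         # "lucky breakdown": exact solution reached
            steps = k + 1
            break
        Q[:, k + 1] = v / H[k + 1, k]
    # minimize || beta*e1 - H y || over the small Hessenberg system
    e1 = np.zeros(steps + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:steps + 1, :steps], e1, rcond=None)
    return Q[:, :steps] @ y
```

With m equal to the dimension of the system, the sketch reproduces the exact solution in exact arithmetic; production implementations add restarting and Givens rotations for the Hessenberg solve.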
Simple "Residual-Norm" Based Algorithms
, 1998
"... drawbacks such as local convergence, being sensitive to the initial guess of solution, and the time penalty involved in finding the inversion of the Jacobian matrix ∂F_i/∂x_j. Based on an invariant manifold defined in the space of (x, t) in terms of the residual norm of the vector F(x), we can deriv ..."
Cited by 7 (4 self)
Does Social Capital Have an Economic Payoff? A Cross-Country Investigation
 Quarterly Journal of Economics
, 1997
"... This paper presents evidence that “social capital” matters for measurable economic performance, using indicators of trust and civic norms from the World Values Surveys for a sample of 29 market economies. Membership in formal groups—Putnam’s measure of social capital—is not associated with trust o ..."
Cited by 1335 (8 self)
The Dantzig Selector: Statistical Estimation When p Is Much Larger Than n
, 2007
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Xβ + z, where β ∈ R^p is a parameter vector of interest, X is a data matrix with possibly far fewer rows than columns, n ≪ p ..."
Cited by 877 (14 self)
, where r is the residual vector y − Xβ̃ and t is a positive scalar. We show that if X obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector β is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large
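The constraint quoted above, a bound t on ‖Xᵀr‖∞ with r = y − Xβ̃, combined with an l1 objective makes the Dantzig selector solvable as a linear program. A sketch using scipy.optimize.linprog with the standard split β = u − v, u, v ≥ 0 (the test data and the choice of t below are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, t):
    """Sketch of the Dantzig selector as a linear program:
        minimize ||beta||_1  subject to  ||X^T (y - X beta)||_inf <= t.
    Splitting beta = u - v with u, v >= 0 makes the l1 objective linear."""
    n, p = X.shape
    G = X.T @ X
    c = X.T @ y
    # |G (u - v) - c| <= t, written as two one-sided inequality blocks
    A_ub = np.block([[G, -G], [-G, G]])
    b_ub = np.concatenate([c + t, t - c])
    res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    u, v = res.x[:p], res.x[p:]
    return u - v
```

When t is at least ‖Xᵀy‖∞, the zero vector is feasible and hence optimal; useful estimates come from smaller t, typically scaled to the noise level as in the paper.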
Why Do Some Countries Produce So Much More Output Per Worker Than Others?
, 1998
"... Output per worker varies enormously across countries. Why? On an accounting basis, our analysis shows that differences in physical capital and educational attainment can only partially explain the variation in output per worker — we find a large amount of variation in the level of the Solow residual ..."
Cited by 2363 (22 self)
residual across countries. At a deeper level, we document that the differences in capital accumulation, productivity, and therefore output per worker are driven by differences in institutions and government policies, which we call social infrastructure. We treat social infrastructure as endogenous
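The Solow residual discussed above is the part of output per worker that inputs cannot account for. As a hedged illustration using the textbook Cobb-Douglas accounting form Y = A · K^α · L^(1−α) (the paper's own decomposition differs in detail, and the numbers here are hypothetical):

```python
# Growth-accounting sketch: back out the Solow residual A from
# Y = A * K**alpha * L**(1 - alpha). All numbers are hypothetical.
alpha = 1 / 3                    # conventional capital-share assumption
Y, K, L = 100.0, 300.0, 50.0     # hypothetical output, capital stock, labor

A = Y / (K ** alpha * L ** (1 - alpha))   # total factor productivity
```

Cross-country differences in A computed this way are what the abstract refers to as the large variation in the Solow residual.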
Regression Shrinkage and Selection Via the Lasso
 Journal of the Royal Statistical Society, Series B
, 1994
"... We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactl ..."
Cited by 4055 (51 self)
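The lasso described above minimizes the residual sum of squares subject to an l1 bound on the coefficients. A coordinate-descent sketch of the equivalent penalized (Lagrangian) form, whose soft-thresholding update is what drives some coefficients exactly to zero (the penalty lam and iteration count are illustrative, and columns of X are assumed nonzero):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso sketch via coordinate descent on the Lagrangian form
        (1/2) ||y - X beta||^2 + lam * sum_j |beta_j|,
    equivalent to the constrained form in the abstract for a matching bound."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)       # per-column squared norms (assumed > 0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            # soft-thresholding: sets the coefficient exactly to 0 when |rho| <= lam
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta
```

For an orthonormal design the update reduces to soft-thresholding each least-squares coefficient, which makes the exact zeros easy to verify by hand.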
Stochastic Perturbation Theory
, 1988
"... In this paper classical matrix perturbation theory is approached from a probabilistic point of view. The perturbed quantity is approximated by a first-order perturbation expansion, in which the perturbation is assumed to be random. This permits the computation of statistics estimating the variatio ..."
Cited by 886 (35 self)
the variation in the perturbed quantity. Up to the higher-order terms that are ignored in the expansion, these statistics tend to be more realistic than perturbation bounds obtained in terms of norms. The technique is applied to a number of problems in matrix perturbation theory, including least squares
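The probabilistic view above can be mimicked numerically: draw random perturbations E and estimate the typical variation of the solution statistically, rather than bounding the worst case in norm. A small Monte Carlo sketch for a linear system (the matrix, the perturbation scale eps, and the sample count are illustrative):

```python
import numpy as np

# Stochastic-perturbation sketch: estimate the typical variation of the
# solution of (A + E) x = b under random E, instead of a worst-case bound.
rng = np.random.default_rng(0)
A = np.array([[3., 1.], [1., 2.]])
b = np.array([1., 2.])
x0 = np.linalg.solve(A, b)        # unperturbed solution

eps = 1e-3                        # perturbation scale (illustrative)
deviations = []
for _ in range(500):
    E = eps * rng.standard_normal(A.shape)
    deviations.append(np.linalg.norm(np.linalg.solve(A + E, b) - x0))

typical = np.mean(deviations)     # statistical estimate of the variation
# To first order, dx ~ -A^{-1} E x0, so `typical` sits well below the
# worst-case norm bound ||A^{-1}|| * ||E|| * ||x0|| for most draws.
```

The gap between `typical` and the norm bound is exactly the point made in the abstract: the statistics are more realistic than worst-case bounds.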
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
 ACM Trans. Math. Software
, 1982
Cited by 649 (21 self)
An iterative method is given for solving Ax = b and min ‖Ax − b‖₂, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing LSQR with several other conjugate-gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned. Categories and Subject Descriptors: G.1.2 [Numerical Analysis]: Approximation—least squares approximation; G.1.3 [Numerical Analysis]: Numerical Linear Algebra—linear systems (direct and ...
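SciPy ships an implementation of this algorithm as scipy.sparse.linalg.lsqr, which makes for a quick illustration of solving min ‖Ax − b‖₂ with a sparse rectangular A (the toy matrix and the tight tolerances below are illustrative):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# A 4x3 sparse least-squares problem: more rows than columns, so the
# system is overdetermined and LSQR minimizes the residual norm.
A = csr_matrix(np.array([[4., 1., 0.],
                         [1., 3., 1.],
                         [0., 1., 2.],
                         [1., 0., 1.]]))
b = np.array([1., 2., 3., 4.])

result = lsqr(A, b, atol=1e-12, btol=1e-12)
x, istop, itn, r1norm = result[0], result[1], result[2], result[3]
# x satisfies the normal equations A^T A x = A^T b up to the tolerances;
# r1norm reports the achieved residual norm ||A x - b||.
```

The stopping criteria from the paper surface here as the atol/btol arguments and the istop return code.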