Results 1–10 of 15
Regularization Tools – a MATLAB package for analysis and solution of discrete ill-posed problems
Numerical Algorithms, 1994
Abstract

Cited by 192 (8 self)
The software described in this report was originally published in Numerical Algorithms 6 (1994), pp. 1–35. The current version is published in Numer. Algo. 46 (2007), pp. 189–194, and it is available from www.netlib.org/numeralgo and www.mathworks.com/matlabcentral/fileexchange.
The Fourier-Series Method for Inverting Transforms of Probability Distributions
1991
Abstract

Cited by 149 (51 self)
This paper reviews the Fourier-series method for calculating cumulative distribution functions (cdf's) and probability mass functions (pmf's) by numerically inverting characteristic functions, Laplace transforms and generating functions. Some variants of the Fourier-series method are remarkably easy to use, requiring programs of less than fifty lines. The Fourier-series method can be interpreted as numerically integrating a standard inversion integral by means of the trapezoidal rule. The same formula is obtained by using the Fourier series of an associated periodic function constructed by aliasing; this explains the name of the method. This Fourier analysis applies to the inversion problem because the Fourier coefficients are just values of the transform. The mathematical centerpiece of the Fourier-series method is the Poisson summation formula, which identifies the discretization error associated with the trapezoidal rule and thus helps bound it. The greatest difficulty is approximately calculating the infinite series obtained from the inversion integral. Within this framework, lattice cdf's can be calculated from generating functions by finite sums without truncation. For other cdf's, an appropriate truncation of the infinite series can be determined from the transform based on estimates or bounds. For Laplace transforms, the numerical integration can be made to produce a nearly alternating series, so that the convergence can be accelerated by techniques such as Euler summation. Alternatively, the cdf can be perturbed slightly by convolution smoothing or windowing to produce a truncation error bound independent of the original cdf. Although error bounds can be determined, an effective approach is to use two different methods without elaborate error analysis. For this...
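The finite-sum lattice case mentioned in this abstract is easy to illustrate: evaluating a probability generating function at the N-th roots of unity and applying the discrete (trapezoidal-rule) inversion sum recovers the pmf up to an aliasing error p_{k+N} + p_{k+2N} + ... The Poisson example and the choice N = 128 below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def invert_pgf(G, k_max, N=128):
    """Recover pmf values p_0..p_{k_max} from a probability generating
    function G(z) by evaluating it at the N-th roots of unity and
    applying the discrete (trapezoidal-rule) inversion sum.  The only
    error is aliasing: the sum returns p_k + p_{k+N} + p_{k+2N} + ...,
    negligible once N is large enough for the tail to be tiny."""
    j = np.arange(N)
    z = np.exp(2j * np.pi * j / N)          # N-th roots of unity
    Gz = G(z)
    k = np.arange(k_max + 1)
    # p_k = (1/N) * sum_j G(z_j) * z_j^(-k)
    p = (Gz[None, :] * np.exp(-2j * np.pi * np.outer(k, j) / N)).sum(axis=1).real / N
    return p

# Poisson(lam) has G(z) = exp(lam*(z-1)); its pmf is known in closed form,
# so the inversion can be checked directly.
lam = 2.0
p = invert_pgf(lambda z: np.exp(lam * (z - 1.0)), k_max=10)
```

For Poisson(2) and N = 128 the aliased tail is far below machine precision, so the recovered values agree with the exact pmf to full accuracy.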
Choosing regularization parameters in iterative methods for ill-posed problems
SIAM J. Matrix Anal. Appl., 2001
Abstract

Cited by 32 (6 self)
Numerical solution of ill-posed problems is often accomplished by discretization (projection onto a finite dimensional subspace) followed by regularization. If the discrete problem has high dimension, though, typically we compute an approximate solution by projecting the discrete problem onto an even smaller dimensional space, via iterative methods based on Krylov subspaces. In this work we present a common framework for efficient algorithms that regularize after this second projection rather than before it. We show that determining regularization parameters based on the final projected problem rather than on the original discretization has firmer justification and often involves less computational expense. We prove some results on the approximate equivalence of this approach to other forms of regularization, and we present numerical examples.
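The "project first, then regularize" idea described in this abstract can be sketched in a few lines: Golub–Kahan (Lanczos) bidiagonalization builds a small (k+1)-by-k projected problem, and Tikhonov regularization is then applied to that small problem only. This is a generic illustration under stated assumptions (dense NumPy arrays, full reorthogonalization, a user-supplied regularization parameter), not the paper's parameter-selection framework:

```python
import numpy as np

def hybrid_tikhonov(A, b, k, lam):
    """Project A x = b onto a k-dimensional Krylov subspace via
    Golub-Kahan bidiagonalization, then apply Tikhonov regularization
    to the small projected problem (regularize *after* projecting)."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b); U[:, 0] = b / beta
    for i in range(k):
        v = A.T @ U[:, i] - (B[i, i - 1] * V[:, i - 1] if i > 0 else 0.0)
        v -= V[:, :i] @ (V[:, :i].T @ v)        # reorthogonalize for stability
        alpha = np.linalg.norm(v); V[:, i] = v / alpha; B[i, i] = alpha
        u = A @ V[:, i] - alpha * U[:, i]
        u -= U[:, :i + 1] @ (U[:, :i + 1].T @ u)
        gamma = np.linalg.norm(u); U[:, i + 1] = u / gamma; B[i + 1, i] = gamma
    # Tikhonov on the projected problem: min ||B y - beta e1||^2 + lam^2 ||y||^2
    e1 = np.zeros(k + 1); e1[0] = beta
    y = np.linalg.solve(B.T @ B + lam**2 * np.eye(k), B.T @ e1)
    return V @ y
```

With k equal to the full dimension and a tiny λ this reduces to ordinary least squares, which gives a simple sanity check; the practical benefit appears when k is much smaller than the problem size.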
Tikhonov Regularization for Large Scale Problems
1997
Abstract

Cited by 26 (1 self)
Tikhonov regularization is a powerful tool for the solution of ill-posed linear systems and linear least squares problems. The choice of the regularization parameter is a crucial step, and many methods have been proposed for this purpose. However, efficient and reliable methods for large scale problems are still missing. In this paper approximation techniques based on the Lanczos algorithm and the theory of Gauss quadrature are proposed to reduce the computational complexity for large scale problems. The new approach is applied to five different heuristics: Morozov's discrepancy principle, the Gfrerer/Raus method, the quasi-optimality criterion, generalized cross-validation, and the L-curve criterion. Numerical experiments are used to determine the efficiency and robustness of the various methods.
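Of the five heuristics listed, Morozov's discrepancy principle is the simplest to illustrate: choose λ so that the Tikhonov residual matches an estimate δ of the noise level. The sketch below uses a full SVD and log-scale bisection, so it is a small-scale illustration of the criterion itself, not the paper's Lanczos/Gauss-quadrature machinery (the bracketing interval and iteration count are illustrative assumptions):

```python
import numpy as np

def discrepancy_lambda(A, b, delta, tau=1.0, lo=1e-12, hi=1e2, iters=60):
    """Morozov's discrepancy principle for Tikhonov regularization:
    pick lam so that ||A x_lam - b|| = tau * delta, where delta estimates
    the noise level.  The residual norm is an increasing function of lam,
    so a log-scale bisection suffices.  Uses the SVD, hence only
    practical for small/medium dense problems."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    # component of b outside range(A); clamp to avoid tiny negatives
    incompat = max(np.linalg.norm(b)**2 - np.linalg.norm(beta)**2, 0.0)
    def residual(lam):
        f = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
        return np.sqrt(np.sum(((1.0 - f) * beta)**2) + incompat)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)              # geometric bisection
        if residual(mid) < tau * delta:
            lo = mid
        else:
            hi = mid
    lam = np.sqrt(lo * hi)
    x = Vt.T @ ((s / (s**2 + lam**2)) * beta)
    return lam, x
```

The point of the paper is precisely that, at large scale, residual(λ) must be bounded via Lanczos and Gauss quadrature instead of an SVD.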
Generalized Cross-Validation for Large Scale Problems
J. Comput. Graph. Stat., 1995
Abstract

Cited by 15 (6 self)
Although generalized cross-validation is a popular tool for calculating a regularization parameter, it has been rarely applied to large scale problems until recently. A major difficulty lies in the evaluation of the cross-validation function, which requires the calculation of the trace of an inverse matrix. In the last few years stochastic trace estimators have been proposed to alleviate this problem. In this paper numerical approximation techniques are used to further reduce the computational complexity. The new approach employs Gauss quadrature to compute lower and upper bounds on the cross-validation function. It only requires the operator form of the system matrix, i.e., a subroutine to evaluate matrix-vector products. Thus the factorization of large matrices can be avoided. The new approach has been implemented in MATLAB. Numerical experiments confirm the remarkable accuracy of the stochastic trace estimator. Regularization parameters are computed for ill-posed problems with 100, ...
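The stochastic trace estimator referred to above is typically Hutchinson's: for random sign (Rademacher) vectors z, E[zᵀMz] = trace(M), so the trace appearing in the GCV denominator can be estimated from matrix-vector products alone, with no factorization. A minimal sketch (the function name and default sample count are illustrative):

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples=50, rng=None):
    """Hutchinson stochastic trace estimator: for Rademacher vectors z
    (entries +/-1 with equal probability), E[z^T M z] = trace(M).
    Only matrix-vector products with M are required, so M may be given
    purely in operator form, as in the abstract above."""
    rng = np.random.default_rng(rng)
    est = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        est += z @ matvec(z)
    return est / num_samples
```

In the GCV setting, `matvec` would apply the influence matrix I − A(AᵀA + λI)⁻¹Aᵀ to a vector, each application costing one regularized solve; the estimator's variance depends only on the off-diagonal mass of M (it is exact for diagonal M).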
Finding a global optimal solution for a quadratically constrained fractional quadratic problem with applications to the regularized total least squares
SIAM J. Matrix Anal. Appl.
Abstract

Cited by 12 (5 self)
We consider the problem of minimizing a fractional quadratic problem involving the ratio of two indefinite quadratic functions, subject to a two-sided quadratic form constraint. This formulation is motivated by the so-called regularized total least squares (RTLS) problem. A key difficulty with this problem is its nonconvexity, and all currently known methods to solve it are guaranteed only to converge to a point satisfying first-order necessary optimality conditions. We prove that a global optimal solution to this problem can be found by solving a sequence of very simple convex minimization problems parameterized by a single parameter. As a result, we derive an efficient algorithm that produces an ε-global optimal solution in a computational effort of O(n³ log ε⁻¹). The algorithm is tested on problems arising from the inverse Laplace transform and image deblurring. Comparison to other well-known RTLS solvers illustrates the attractiveness of our new method. Key words. regularized total least squares, fractional programming, nonconvex quadratic optimization, convex programming
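The core idea, reducing a fractional problem to a one-parameter family of simpler problems, can be illustrated in a much simpler setting than RTLS with a Dinkelbach-type iteration for minimizing a ratio of quadratic forms over the unit sphere: each parameterized subproblem is just a smallest-eigenvalue computation. This toy sketch is not the paper's algorithm, which handles indefinite numerator and denominator and a two-sided constraint:

```python
import numpy as np

def min_quadratic_ratio(A, B, tol=1e-12, max_iter=100):
    """Dinkelbach-type iteration for min_x (x^T A x)/(x^T B x) with
    A symmetric and B symmetric positive definite.  Each step solves the
    parameterized subproblem  min_{||x||=1} x^T (A - alpha B) x  (a
    smallest-eigenvalue problem) and updates alpha with the current
    ratio; alpha converges to the global minimum of the ratio."""
    alpha = 0.0
    x = None
    for _ in range(max_iter):
        w, V = np.linalg.eigh(A - alpha * B)
        x = V[:, 0]                         # minimizer of the subproblem
        new_alpha = (x @ A @ x) / (x @ B @ x)
        if abs(new_alpha - alpha) < tol:
            break
        alpha = new_alpha
    return alpha, x
```

The design point mirrors the abstract: the nonconvex fractional objective is never attacked directly; only a sequence of tractable subproblems indexed by the scalar α is solved.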
L-curves and discrete ill-posed problems
 BIT
Abstract

Cited by 9 (7 self)
The GMRES method is a popular iterative method for the solution of large linear systems of equations with a nonsymmetric nonsingular matrix. This paper discusses application of the GMRES method to the solution of large linear systems of equations that arise from the discretization of linear ill-posed problems. These linear systems are severely ill-conditioned and are referred to as discrete ill-posed problems. We are concerned with the situation when the right-hand side vector is contaminated by measurement errors, and we discuss how a meaningful approximate solution of the discrete ill-posed problem can be determined by early termination of the iterations with the GMRES method. We propose a termination criterion based on the condition number of the projected matrices defined by the GMRES method. Under certain conditions on the linear system, the termination index corresponds to the “vertex” of an L-shaped curve.
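A condition-number-based stopping rule of the kind described can be sketched as follows: run Arnoldi-based GMRES and stop as soon as the condition number of the projected Hessenberg matrix H_k exceeds a threshold, since later iterations mainly amplify the noise in the right-hand side. The threshold value and the dense-matrix setup are illustrative assumptions, not the paper's precise criterion:

```python
import numpy as np

def gmres_cond_stop(A, b, cond_max=1e8, max_iter=50):
    """GMRES with early termination for discrete ill-posed problems:
    build the Arnoldi decomposition A Q_k = Q_{k+1} H_k and stop once
    cond(H_k) exceeds cond_max, then solve the small projected least
    squares problem min ||H_k y - beta e1||."""
    n = b.size
    Q = np.zeros((n, max_iter + 1)); H = np.zeros((max_iter + 1, max_iter))
    beta = np.linalg.norm(b); Q[:, 0] = b / beta
    k_used = 0
    for k in range(max_iter):
        w = A @ Q[:, k]
        for i in range(k + 1):                  # Arnoldi orthogonalization
            H[i, k] = Q[:, i] @ w
            w -= H[i, k] * Q[:, i]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] < 1e-14:                 # lucky breakdown: exact solve
            k_used = k + 1
            break
        Q[:, k + 1] = w / H[k + 1, k]
        if np.linalg.cond(H[:k + 2, :k + 1]) > cond_max:
            break                               # projected problem too ill-conditioned
        k_used = k + 1
    e1 = np.zeros(k_used + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k_used + 1, :k_used], e1, rcond=None)
    return Q[:, :k_used] @ y
```

On a well-conditioned system the threshold never triggers and the routine behaves as plain GMRES; on a discrete ill-posed problem, the early stop plays the role of the regularization parameter.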
Cauchy-Like Preconditioners for 2-Dimensional Ill-Posed Problems
, 1997
Abstract

Cited by 5 (2 self)
Ill-conditioned matrices with block Toeplitz, Toeplitz block (BTTB) structure arise from the discretization of certain ill-posed problems in signal and image processing. We use a preconditioned conjugate gradient algorithm to compute a regularized solution to this linear system given noisy data. Our preconditioner is a Cauchy-like block diagonal approximation to an orthogonal transformation of the BTTB matrix. We show the preconditioner has desirable properties when the kernel of the ill-posed problem is smooth: the largest singular values of the preconditioned matrix are clustered around one, the smallest singular values remain small, and the subspaces corresponding to the largest and smallest singular values, respectively, remain unmixed. For a system involving np variables, the preconditioned algorithm costs only O(np(lg n + lg p)) operations per iteration. We demonstrate the effectiveness of the preconditioner on three examples. Key words. Regularization, ill-posed problems, To...
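The preconditioned conjugate gradient iteration used here needs only operator access to the matrix and to the preconditioner, which is what makes structured (BTTB, Cauchy-like) approximations attractive. A generic PCG skeleton in that spirit, with an arbitrary user-supplied preconditioner rather than the paper's Cauchy-like construction, looks like:

```python
import numpy as np

def pcg(matvec, b, prec_solve, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients using operator access only:
    matvec(x) applies the (e.g. BTTB) SPD matrix, prec_solve(r) applies
    the inverse of the preconditioner.  For discrete ill-posed problems
    with noisy data, early stopping of this iteration acts as the
    regularization."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = prec_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = prec_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p              # update search direction
        rz = rz_new
    return x
```

The per-iteration cost is one `matvec` plus one `prec_solve`; with FFT-based BTTB products and a block diagonal preconditioner this matches the O(np(lg n + lg p)) figure quoted in the abstract.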
Feature Extraction for Image Super-resolution using Finite Rate of Innovation Principles
, 2008
Abstract

Cited by 5 (0 self)
I certify that this thesis, and the research to which it refers, are the product of my own work, and that any ideas or quotations from the work of other people, published or otherwise, are fully acknowledged in accordance with the standard referencing practices of the discipline. I acknowledge the helpful guidance and support of my supervisor, Dr. Pier Luigi Dragotti. The material of this thesis has not been submitted for any degree at any other academic or professional institution.
Anderssen, The trade-off between regularity and stability in Tikhonov regularization
Math. of Comp., 1997
Abstract

Cited by 5 (1 self)
When deriving rates of convergence for the approximations generated by the application of Tikhonov regularization to ill-posed operator equations, assumptions must be made about the nature of the stabilization (i.e., the choice of the seminorm in the Tikhonov regularization) and the regularity of the least squares solutions which one looks for. In fact, it is clear from works of Hegland, Engl and Neubauer and Natterer that, in terms of the rate of convergence, there is a trade-off between stabilization and regularity. It is this matter which is examined in this paper by means of the best-possible worst-error estimates. The results of this paper provide better estimates than those of Engl and Neubauer, and also include and extend the best possible rate derived by Natterer. The paper concludes with an application of these results to first-kind integral equations with smooth kernels.