Results 1 - 10 of 8,704
METHODS FOR CHOOSING THE REGULARIZATION PARAMETER
"... Many inverse problems arising in practice can be modelled in the form of an operator equation (1.1) Kf = g, where the function g: JRd ~ lR is known only as discrete noisy data y; = g(x;) + ti, ..."
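The operator-equation setup in this first entry is the classic home of Tikhonov regularization. A minimal numerical sketch (the matrix K, noise level, and parameter lam below are illustrative choices, not drawn from the paper):

```python
import numpy as np

def tikhonov_solve(K, y, lam):
    """Solve min_f ||K f - y||^2 + lam * ||f||^2 via the normal equations."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

# Ill-conditioned toy operator with noisy data.
rng = np.random.default_rng(0)
K = np.vander(np.linspace(0, 1, 20), 10, increasing=True)  # nearly rank-deficient
f_true = rng.standard_normal(10)
y = K @ f_true + 1e-3 * rng.standard_normal(20)

f_reg = tikhonov_solve(K, y, lam=1e-6)
```

In practice lam is chosen by data-driven rules such as the discrepancy principle or generalized cross-validation, which is exactly the problem several papers in this list address.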
Iterative Evaluation of the Regularization Parameter in Regularized Image Restoration
- J. Vis. Commun. Image Represent., 1992
"... this paper a nonlinear regularized iterative image restoration algorithm is proposed, according to which no prior knowledge about the noise variance is assumed. The algorithm results from a set-theoretic regularization approach, where bounds of the stabilizing functional and the noise variance, whic ..."
Abstract
-
Cited by 7 (4 self)
- Add to MetaCart
, which determine the regularization parameter, are updated at each iteration step. Sufficient conditions for the convergence of the algorithm, as well as an optimality criterion for the regularization parameter, are derived and experimental results are shown. c 1992 Academic Press, Inc
Regularized discriminant analysis
- J. Amer. Statist. Assoc., 1989
"... Linear and quadratic discriminant analysis are considered in the small sample high-dimensional setting. Alternatives to the usual maximum likelihood (plug-in) estimates for the covariance matrices are proposed. These alternatives are characterized by two parameters, the values of which are customize ..."
Cited by 468 (2 self)
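Friedman's regularized discriminant analysis stabilizes the class covariance estimates with a two-parameter shrinkage. A simplified, unweighted sketch of that idea (the original blends covariances weighted by sample counts; the lam and gamma values here are illustrative):

```python
import numpy as np

def rda_covariance(S_k, S_pooled, lam, gamma):
    """Two-parameter shrinkage in the spirit of regularized discriminant
    analysis: blend the class covariance with the pooled covariance
    (controlled by lam), then shrink toward a scaled identity
    (controlled by gamma). Simplified, unweighted blend."""
    p = S_k.shape[0]
    S_lam = (1 - lam) * S_k + lam * S_pooled
    return (1 - gamma) * S_lam + gamma * (np.trace(S_lam) / p) * np.eye(p)

# Small-sample setting: 3 observations in 4 dimensions gives a singular
# class covariance; shrinkage restores invertibility.
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 4))
S_k = np.cov(X, rowvar=False)          # rank-deficient (rank <= 2)
Sigma = rda_covariance(S_k, np.eye(4), lam=0.3, gamma=0.2)
```

With lam = gamma = 0 this reduces to ordinary quadratic discriminant analysis; lam = 1, gamma = 0 gives the pooled-covariance estimate of linear discriminant analysis.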
Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods
- ADVANCES IN LARGE MARGIN CLASSIFIERS, 1999
"... The output of a classifier should be a calibrated posterior probability to enable post-processing. Standard SVMs do not provide such probabilities. One method to create probabilities is to directly train a kernel classifier with a logit link function and a regularized maximum likelihood score. Howev ..."
Cited by 1051 (0 self)
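Platt's method fits a sigmoid P(y=1|f) = 1/(1 + exp(A·f + B)) to the SVM's decision values. A rough sketch using plain gradient descent (the original paper uses a damped Newton iteration with smoothed target probabilities; the synthetic scores below are illustrative):

```python
import numpy as np

def platt_fit(scores, labels, iters=2000, lr=0.1):
    """Fit P(y=1|f) = 1 / (1 + exp(A*f + B)) by gradient descent on the
    cross-entropy loss (Platt's original method instead uses a damped
    Newton iteration and smoothed targets)."""
    A, B = 0.0, 0.0
    t = labels.astype(float)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(A * scores + B))
        g = t - p                       # gradient of the loss w.r.t. (A*f + B)
        A -= lr * np.mean(g * scores)
        B -= lr * np.mean(g)
    return A, B

def platt_prob(f, A, B):
    return 1.0 / (1.0 + np.exp(A * f + B))

# Synthetic decision values: positives cluster at +2, negatives at -2.
rng = np.random.default_rng(0)
scores = np.concatenate([2.0 + 0.5 * rng.standard_normal(50),
                         -2.0 + 0.5 * rng.standard_normal(50)])
labels = np.concatenate([np.ones(50), np.zeros(50)])
A, B = platt_fit(scores, labels)
```

Note that A comes out negative, so large positive decision values map to probabilities near one.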
Regularization Parameter Estimation for Feedforward Neural Networks
, 2002
"... Under the framework of the Kullback-Leibler distance, we show that a particular case of Gaussian probability function for feedforward neural networks reduces into the first order Tikhonov regularizer. The smooth parameter in kernel density estimation plays the role of the regularization parameter. U ..."
Cited by 6 (1 self)
Computation of Regularization Parameters using the Fourier Coefficients.
, 2009
"... In the solution of ill-posed problems by means of regularization methods, a crucial issue is the computation of the regularization parameter. In this work we focus on the Truncated Singular Value Decomposition (TSVD) and Tikhonov method and we define a method for computing the regularization paramet ..."
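The TSVD method mentioned here discards the small singular values that amplify data noise. A minimal sketch (toy diagonal operator; in practice the truncation index k would be chosen by a parameter-selection rule such as the one this paper proposes):

```python
import numpy as np

def tsvd_solve(K, y, k):
    """Truncated-SVD solution: keep the k largest singular values and
    discard the rest, which would otherwise amplify the data noise."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    coeffs = (U[:, :k].T @ y) / s[:k]
    return Vt[:k].T @ coeffs

# Toy diagonal operator with one tiny singular value.
K = np.diag([1.0, 0.5, 1e-8])
f_true = np.ones(3)
y = K @ f_true + np.array([1e-4, -1e-4, 1e-4])   # noisy data

f_trunc = tsvd_solve(K, y, k=2)   # stable
f_full = tsvd_solve(K, y, k=3)    # noise amplified by the 1e-8 singular value
```

Dividing the noise in the third component by the 1e-8 singular value blows the unregularized solution up by several orders of magnitude, while truncation keeps it near the true scale.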
REGULARIZATION PARAMETER DETERMINATION FOR DISCRETE ILL-POSED PROBLEMS
"... Abstract. Straightforward solution of discrete ill-posed linear systems of equations or least-squares problems with error contaminated data does not, in general, give meaningful results, because propagated error destroys the computed solution. The problems have to be modified to reduce their sensiti ..."
Abstract
- Add to MetaCart
their sensitivity to the error in the data. The amount of modification is determined by a regularization parameter. It can be difficult to determine a suitable value of the regularization parameter when no knowledge of the norm of error in the data is available. This paper proposes a new simple technique