Results 1–10 of 77
Prediction With Gaussian Processes: From Linear Regression To Linear Prediction And Beyond
 Learning and Inference in Graphical Models
, 1997
Abstract

Cited by 231 (4 self)
The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. This leads into a more general discussion of Gaussian processes in section 4. Section 5 deals with further issues, including hierarchical modelling and the setting of the parameters that control the Gaussian process, the covariance functions for neural network models, and the use of Gaussian processes in classification problems.

1 Introduction

In the last decade neural networks have been used to tackle regression and classification problems, with some notable successes. It has also been widely recognized that they form a part of a wide variety of nonlinear statistical techniques that can be used for...
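As a concrete companion to this abstract, here is a minimal numerical sketch (my own illustration, not code from the paper) of Gaussian process prediction: a prior over functions specified by a squared-exponential covariance, conditioned on noisy observations to give a posterior mean and variance. The kernel choice, hyperparameters, and data are all assumptions made for the example.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, signal_var=1.0):
    """Squared-exponential covariance k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2))."""
    d = a[:, None] - b[None, :]
    return signal_var * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_test, noise_var=0.1):
    """Posterior mean and pointwise variance of GP regression at x_test."""
    K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    K_ss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)                 # (K + sigma^2 I)^{-1} y
    mean = K_s @ alpha
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)

# Made-up data: noisy-free samples of a sine, predicted back at the same points.
x = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * x)
mean, var = gp_predict(x, y, x)
```

Conditioning shrinks the pointwise posterior variance below the prior variance (here 1.0) at every test point, which is the "prior over functions" view in action.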
Support Vector Machines, Reproducing Kernel Hilbert Spaces and the Randomized GACV
, 1998
Abstract

Cited by 187 (12 self)
In this paper we very briefly review some of these results. RKHSs can be chosen, tailored to the problem at hand, in many ways, and we review a few of them, including radial basis function and smoothing spline ANOVA spaces. Girosi (1997), Smola and Schölkopf (1997), Schölkopf et al. (1997) and others have noted the relationship between SVMs and penalty methods as used in the statistical theory of nonparametric regression. In Section 1.2 we elaborate on this, and show how replacing the likelihood functional of the logit (log odds ratio) in penalized likelihood methods for Bernoulli [yes-no] data with certain other functionals of the logit (to be called SVM functionals) results in several of the SVMs that are of modern research interest. The SVM functionals we consider more closely resemble a "goodness-of-fit" measured by classification error than a "goodness-of-fit" measured by the comparative Kullback-Leibler distance, which is frequently associated with likelihood functionals. This observation is not new or profound, but it is hoped that the discussion here will help to bridge the conceptual gap between classical nonparametric regression via penalized likelihood methods and SVMs in RKHSs. Furthermore, since SVMs can be expected to provide more compact representations of the desired classification boundaries than boundaries based on estimating the logit by penalized likelihood methods, they have potential as a prescreening or model selection tool for sifting through many variables or regions of attribute space to find influential quantities, even when the ultimate goal is not classification but to understand how the logit varies as the important variables change throughout their range. This is potentially applicable to the variable/model selection problem in demographic m...
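The contrast the abstract draws between likelihood functionals and SVM functionals of the logit can be made concrete. Below is an illustrative comparison of my own (with f denoting the logit and y a ±1 label): the penalized-likelihood functional is the negative Bernoulli log-likelihood log(1 + e^{-yf}), while the SVM hinge functional max(0, 1 - yf) is exactly zero once an example is confidently correct, which is what yields compact SVM representations.

```python
import math

def logistic_loss(y, f):
    """Negative Bernoulli log-likelihood of the logit f, for y in {-1, +1}."""
    return math.log(1.0 + math.exp(-y * f))

def hinge_loss(y, f):
    """SVM (hinge) functional: max(0, 1 - y f); zero once the margin y f >= 1."""
    return max(0.0, 1.0 - y * f)

# Both penalize a confidently wrong logit; only the hinge vanishes exactly
# for confidently correct examples.
for f in (-2.0, 0.0, 2.0):
    print(f, logistic_loss(+1, f), hinge_loss(+1, f))
```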
Smoothing Spline ANOVA for Exponential Families, with Application to the Wisconsin Epidemiological Study of Diabetic Retinopathy
 Ann. Statist.
, 1995
Abstract

Cited by 101 (46 self)
Let $y_i$, $i = 1, \dots, n$ be independent observations with the density of $y_i$ of the form $h(y_i, f_i) = \exp[y_i f_i - b(f_i) + c(y_i)]$, where $b$ and $c$ are given functions and $b$ is twice continuously differentiable and bounded away from 0. Let $f_i = f(t(i))$, where $t = (t_1, \dots, t_d) \in T^{(1)} \otimes \cdots \otimes T^{(d)} = T$, the $T^{(\alpha)}$ are measurable spaces of rather general form, and $f$ is an unknown function on $T$ with some assumed `smoothness' properties. Given $\{y_i, t(i)\}$, $i = 1, \dots, n$, it is desired to estimate $f(t)$ for $t$ in some region of interest contained in $T$. We develop the fitting of smoothing spline ANOVA models to this data of the form $f(t) = C + \sum_\alpha f_\alpha(t_\alpha) + \sum_{\alpha < \beta} f_{\alpha\beta}(t_\alpha, t_\beta) + \cdots$. The components of the decomposition satisfy side conditions which generalize the usual side conditions for parametric ANOVA. The estimate of $f$ is obtained as the minimizer...
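For Bernoulli data, the exponential-family density above specializes to $f$ the logit, $b(f) = \log(1 + e^f)$ and $c(y) = 0$. A quick numeric check of that identity (my own illustration, not from the paper):

```python
import math

def b(f):
    """Cumulant function for Bernoulli data: b(f) = log(1 + e^f)."""
    return math.log(1.0 + math.exp(f))

def h(y, f):
    """Exponential-family density h(y, f) = exp[y f - b(f)] (c(y) = 0 here)."""
    return math.exp(y * f - b(f))

f = 0.7                              # an arbitrary logit value
p = 1.0 / (1.0 + math.exp(-f))       # the implied success probability
assert abs(h(1, f) - p) < 1e-12          # h(1, f) is P(y = 1)
assert abs(h(0, f) - (1 - p)) < 1e-12    # h(0, f) is P(y = 0)
assert abs(h(1, f) + h(0, f) - 1.0) < 1e-12
```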
A Computationally Efficient Superresolution Image Reconstruction Algorithm
, 2000
Abstract

Cited by 72 (4 self)
Superresolution reconstruction produces a high-resolution image from a set of low-resolution images. Previous iterative methods for superresolution had not adequately addressed the computational and numerical issues for this ill-conditioned and typically underdetermined large-scale problem. We propose efficient block circulant preconditioners for solving the Tikhonov-regularized superresolution problem by the conjugate gradient method. We also extend to underdetermined systems the derivation of the generalized cross-validation method for automatic calculation of regularization parameters. The effectiveness of our preconditioners and regularization techniques is demonstrated with superresolution results for a simulated sequence and a forward-looking infrared (FLIR) camera image sequence.
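A stripped-down sketch of the core linear-algebra step, solving the Tikhonov-regularized normal equations matrix-free by conjugate gradients, is below. It is my own illustration with a random stand-in forward operator and no preconditioner; the paper's actual contribution, the block circulant preconditioners and the true super-resolution operator, is not reproduced here.

```python
import numpy as np

def cg(matvec, b, tol=1e-10, max_iter=500):
    """Plain conjugate gradients for a symmetric positive definite operator."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))    # underdetermined stand-in forward model
y = rng.standard_normal(30)          # stand-in low-resolution data
lam = 0.1                            # regularization parameter (arbitrary)

# Normal equations of min ||A x - y||^2 + lam ||x||^2:
# (A^T A + lam I) x = A^T y, applied matrix-free.
x = cg(lambda v: A.T @ (A @ v) + lam * v, A.T @ y)
residual = A.T @ (A @ x) + lam * x - A.T @ y
```

At scale one never forms A^T A; the preconditioners in the paper accelerate exactly this CG iteration.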
Smoothing spline ANOVA models for large data sets with Bernoulli observations and the randomized GACV
 Ann. Statist.
Abstract

Cited by 53 (24 self)
We propose the randomized GACV (ranGACV) method for choosing multiple smoothing parameters in penalized likelihood estimates for Bernoulli data. The method is intended for application with penalized likelihood smoothing spline ANOVA models. In addition we propose a class of approximate numerical methods for solving the penalized likelihood variational problem which, in conjunction with the ranGACV method, allows the application of smoothing spline ANOVA models with Bernoulli data to much larger data sets than previously possible. These methods are based on choosing an approximating subset of the natural (representer) basis functions for the variational problem. Simulation studies with synthetic data, including synthetic data mimicking demographic risk factor data sets, are used to examine the properties of the method and to compare the approach with the GRKPACK code of Wang (1997c). Bayesian "confidence intervals" are obtained for the fits and are shown in the simulation studies to have the "across the function" property usually claimed for these confidence intervals. Finally the method is applied...
Monte-Carlo SURE: A black-box optimization of regularization parameters for general denoising algorithms
 IEEE Trans. Image Processing
, 2008
Abstract

Cited by 49 (5 self)
We consider the problem of optimizing the parameters of a given denoising algorithm for restoration of a signal corrupted by white Gaussian noise. To achieve this, we propose to minimize Stein's unbiased risk estimate (SURE), which provides a means of assessing the true mean-squared error (MSE) purely from the measured data, without need for any knowledge about the noise-free signal. Specifically, we present a novel Monte-Carlo technique which enables the user to calculate SURE for an arbitrary denoising algorithm characterized by some specific parameter setting. Our method is a black-box approach which solely uses the response of the denoising operator to additional input noise and does not ask for any information about its functional form. This, therefore, permits the use of SURE for optimization of a wide variety of denoising algorithms. We justify our claims by presenting experimental results for SURE-based optimization of a series of popular image-denoising algorithms such as total-variation denoising, wavelet soft-thresholding, and Wiener filtering/smoothing splines. In the process, we also compare the performance of these methods. We demonstrate numerically that SURE computed using the new approach accurately predicts the true MSE for all the considered algorithms. We also show that SURE uncovers the optimal values of the parameters in all cases.
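The Monte-Carlo step the abstract describes amounts to estimating the divergence of the denoiser with a random probe vector. The sketch below (my own illustration, with made-up parameters) implements that estimate and the resulting SURE formula, and sanity-checks the divergence on a linear shrinkage denoiser whose divergence is known in closed form.

```python
import numpy as np

def mc_divergence(denoiser, y, eps=1e-4, rng=None):
    """Monte-Carlo estimate of div f(y) = sum_k d f_k / d y_k via one random
    probe b:  b^T (f(y + eps*b) - f(y)) / eps."""
    rng = np.random.default_rng(rng)
    b = rng.standard_normal(y.shape)
    return b @ (denoiser(y + eps * b) - denoiser(y)) / eps

def sure(denoiser, y, sigma2, rng=None):
    """SURE estimate of the per-sample MSE of denoiser(y), for y = x + noise
    of variance sigma2: ||f(y) - y||^2/n - sigma2 + 2 sigma2 div/n."""
    n = y.size
    fy = denoiser(y)
    div = mc_divergence(denoiser, y, rng=rng)
    return np.sum((fy - y) ** 2) / n - sigma2 + 2.0 * sigma2 * div / n

# Sanity check on the linear shrinkage denoiser f(y) = 0.8 y,
# whose exact divergence is 0.8 * n.
rng = np.random.default_rng(1)
y = rng.standard_normal(1000)
est = np.mean([mc_divergence(lambda v: 0.8 * v, y, rng=s) for s in range(50)])
```

Because the probe only touches the denoiser through two evaluations, the scheme is genuinely black-box: it applies unchanged to thresholding, total-variation, or any other denoiser.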
Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement
 IEEE Trans. Image Processing
, 2001
Abstract

Cited by 41 (7 self)
In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method. Index Terms: blind restoration, blur identification, generalized cross-validation, quadrature rules, superresolution.
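For reference, the GCV score that the Lanczos/Gauss-quadrature machinery approximates can be written out exactly for a small dense Tikhonov problem via the SVD (my own illustration; the paper's point is precisely to avoid this exact computation at scale):

```python
import numpy as np

def gcv(A, y, lam):
    """Exact GCV score for Tikhonov regularization:
    V(lam) = n ||(I - H) y||^2 / tr(I - H)^2,
    with influence matrix H = A (A^T A + lam I)^{-1} A^T, computed via SVD."""
    n = len(y)
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    filt = s ** 2 / (s ** 2 + lam)      # Tikhonov filter factors
    Uty = U.T @ y
    resid2 = np.sum(((1.0 - filt) * Uty) ** 2) + (y @ y - Uty @ Uty)
    return n * resid2 / (n - np.sum(filt)) ** 2

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 20))       # made-up overdetermined model
x_true = rng.standard_normal(20)
y = A @ x_true + 0.1 * rng.standard_normal(40)

lams = np.logspace(-4, 2, 25)
best = lams[np.argmin([gcv(A, y, l) for l in lams])]
```

The traces hidden in tr(I - H) are exactly what the Lanczos and quadrature bounds replace when the SVD is unaffordable.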
Some large-scale matrix computation problems
, 1996
Abstract

Cited by 35 (4 self)
There are numerous applications in physics, statistics and electrical circuit simulation where it is required to bound entries and the trace of the inverse and the determinant of a large sparse matrix. All these computational tasks are related to the central mathematical problem studied in this paper, namely, bounding the bilinear form $u^T f(A) v$ for a given matrix $A$ and vectors $u$ and $v$, where $f$ is a given smooth function defined on the spectrum of $A$. We will study a practical numerical algorithm for bounding the bilinear form, where the matrix $A$ is only referenced through matrix-vector multiplications. A Monte Carlo method is also presented to efficiently estimate the trace of the inverse and the determinant of a large sparse matrix.
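The Monte Carlo trace idea rests on the identity E[z^T A^{-1} z] = tr(A^{-1}) for random probe vectors z with independent ±1 entries. A minimal sketch of my own (using a dense solve for clarity, where a large sparse code would reference A only through an iterative solver):

```python
import numpy as np

def hutchinson_trace_inv(A, n_samples=200, rng=None):
    """Monte-Carlo estimate of tr(A^{-1}): average z^T A^{-1} z over
    Rademacher (+/-1) probes z, each costing one linear solve with A."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ np.linalg.solve(A, z)
    return total / n_samples

rng = np.random.default_rng(3)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)     # symmetric positive definite test matrix
est = hutchinson_trace_inv(A, rng=4)
exact = np.trace(np.linalg.inv(A))
```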
Adaptive Tuning of Numerical Weather Prediction Models: Simultaneous Estimation of Weighting, Smoothing and Physical Parameters
, 1996
Abstract

Cited by 34 (10 self)
In Wahba et al. (1995) it was shown how the randomized trace method could be used to adaptively tune numerical weather prediction models via generalized cross-validation (GCV) and related methods. In this paper a `toy' four-dimensional data assimilation model is developed (actually one space and one time variable), consisting of an equivalent barotropic vorticity equation on a latitude circle, and used to demonstrate how this technique may be used to simultaneously tune weighting, smoothing and physical parameters. Analyses both with the model as a strong constraint (corresponding to the usual 4D-Var approach) and as a weak constraint (corresponding theoretically to a fixed-interval Kalman smoother) are carried out. The conclusions are limited to the particular toy problem considered, but it can be seen how more elaborate experiments could be carried out, as well as how the method might be applied in practice. We have considered five adjustable parameters, two related to a distributed c...
Tikhonov Regularization for Large Scale Problems
, 1997
Abstract

Cited by 32 (1 self)
Tikhonov regularization is a powerful tool for the solution of ill-posed linear systems and linear least squares problems. The choice of the regularization parameter is a crucial step, and many methods have been proposed for this purpose. However, efficient and reliable methods for large-scale problems are still missing. In this paper approximation techniques based on the Lanczos algorithm and the theory of Gauss quadrature are proposed to reduce the computational complexity for large-scale problems. The new approach is applied to five different heuristics: Morozov's discrepancy principle, the Gfrerer/Raus method, the quasi-optimality criterion, generalized cross-validation, and the L-curve criterion. Numerical experiments are used to determine the efficiency and robustness of the various methods.
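As a concrete instance of one of the five heuristics, here is a small dense illustration (my own, not from the paper) of Morozov's discrepancy principle: choose the regularization parameter so that the residual norm of the Tikhonov solution matches the noise level, exploiting the fact that the residual norm increases monotonically with the parameter.

```python
import numpy as np

def tikhonov_solve(A, y, lam):
    """Minimizer of ||A x - y||^2 + lam ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def discrepancy_lambda(A, y, delta, lo=1e-8, hi=1e8, iters=100):
    """Bisect (on a log scale) for lam with ||A x(lam) - y|| = delta; the
    residual norm is monotone increasing in lam, so bisection applies."""
    res = lambda lam: np.linalg.norm(A @ tikhonov_solve(A, y, lam) - y)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if res(mid) < delta:
            lo = mid            # residual too small: regularize harder
        else:
            hi = mid
    return np.sqrt(lo * hi)

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 30))       # made-up overdetermined model
x_true = rng.standard_normal(30)
noise = 0.1 * rng.standard_normal(60)
y = A @ x_true + noise
lam = discrepancy_lambda(A, y, delta=np.linalg.norm(noise))
```

Each bisection step here costs a dense solve; the paper's Lanczos/Gauss-quadrature bounds make the same residual evaluations affordable for large sparse systems.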