Results 1 - 10 of 153
Wavelet Thresholding via a Bayesian Approach
J. R. Statist. Soc. B, 1996
"... We discuss a Bayesian formalism which gives rise to a type of wavelet threshold estimation in nonparametric regression. A prior distribution is imposed on the wavelet coefficients of the unknown response function, designed to capture the sparseness of wavelet expansion common to most applications. ..."
Abstract

Cited by 201 (27 self)
 Add to MetaCart
We discuss a Bayesian formalism which gives rise to a type of wavelet threshold estimation in nonparametric regression. A prior distribution is imposed on the wavelet coefficients of the unknown response function, designed to capture the sparseness of wavelet expansion common to most applications. For the prior specified, the posterior median yields a thresholding procedure. Our prior model for the underlying function can be adjusted to give functions falling in any specific Besov space. We establish a relation between the hyperparameters of the prior model and the parameters of those Besov spaces within which realizations from the prior will fall. Such a relation gives insight into the meaning of the Besov space parameters. Moreover, the established relation makes it possible in principle to incorporate prior knowledge about the function's regularity properties into the prior model for its wavelet coefficients. However, prior knowledge about a function's regularity properties might b...
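As a generic illustration of the kind of rule such Bayesian procedures induce, here is a minimal numpy sketch of soft thresholding applied to a single level of an orthonormal Haar transform. The function names and the single-level transform are illustrative choices, not the paper's posterior-median estimator:

```python
import numpy as np

def soft_threshold(w, t):
    """Soft-thresholding rule: shrink coefficients toward zero,
    setting those with |w| <= t exactly to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def haar_denoise(y, t):
    """One level of the orthonormal Haar transform (even-length y),
    soft-threshold the detail coefficients, then invert."""
    y = np.asarray(y, dtype=float)
    approx = (y[0::2] + y[1::2]) / np.sqrt(2)   # smooth coefficients
    detail = (y[0::2] - y[1::2]) / np.sqrt(2)   # wavelet coefficients
    detail = soft_threshold(detail, t)          # sparsity-inducing shrinkage
    out = np.empty_like(y)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```

The posterior median under the sparse prior described above yields a (more refined, data-dependent) rule of this thresholding type.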
Prediction With Gaussian Processes: From Linear Regression To Linear Prediction And Beyond
Learning and Inference in Graphical Models, 1997
"... The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. Th ..."
Abstract

Cited by 195 (4 self)
 Add to MetaCart
The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. This leads into a more general discussion of Gaussian processes in section 4. Section 5 deals with further issues, including hierarchical modelling and the setting of the parameters that control the Gaussian process, the covariance functions for neural network models and the use of Gaussian processes in classification problems. 1 Introduction: In the last decade neural networks have been used to tackle regression and classification problems, with some notable successes. It has also been widely recognized that they form a part of a wide variety of nonlinear statistical techniques that can be used for...
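As a sketch of the viewpoint the tutorial describes (prediction directly from a prior over functions rather than over parameters), here is a minimal numpy implementation of the standard GP posterior mean and variance under a squared-exponential covariance; the helper names and hyperparameter defaults are illustrative:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance k(a, b) for 1-d inputs."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x, y, xs, noise=0.1):
    """Posterior mean and variance of a zero-mean GP at test inputs xs,
    given noisy observations y at training inputs x."""
    K = rbf(x, x) + noise**2 * np.eye(len(x))   # prior + noise covariance
    Ks = rbf(x, xs)                             # train/test covariance
    Kss = rbf(xs, xs)                           # test covariance
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)
```

With a near-zero noise level the posterior mean interpolates the data, and the predictive variance collapses at the training inputs, which is the function-space behaviour the tutorial emphasizes.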
Flexible smoothing with B-splines and penalties
Statistical Science, 1996
"... Bsplines are attractive for nonparametric modelling, but choosing the optimal number and positions of knots is a complex task. Equidistant knots can be used, but their small and discrete number allows only limited control over smoothness and fit. We propose to use a relatively large number of knots ..."
Abstract

Cited by 179 (3 self)
 Add to MetaCart
B-splines are attractive for nonparametric modelling, but choosing the optimal number and positions of knots is a complex task. Equidistant knots can be used, but their small and discrete number allows only limited control over smoothness and fit. We propose to use a relatively large number of knots and a difference penalty on coefficients of adjacent B-splines. We show connections to the familiar spline penalty on the integral of the squared second derivative. A short overview of B-splines, their construction, and penalized likelihood is presented. We discuss properties of penalized B-splines and propose various criteria for the choice of an optimal penalty parameter. Nonparametric logistic regression, density estimation and scatterplot smoothing are used as examples. Some details of the computations are presented. Keywords: Generalized linear models, smoothing, nonparametric models, splines, density estimation. Address for correspondence: DCMR Milieudienst Rijnmond, 'sGravelandse...
Local Regression: Automatic Kernel Carpentry
Statistical Science, 1993
"... . A kernel smoother is an intuitive estimate of a regression function or conditional expectation; at each point x 0 the estimate of E(Y j x 0 ) is a weighted mean of the sample Y i , with observations close to x 0 receiving the largest weights. Unfortunately this simplicity has flaws. At the boundar ..."
Abstract

Cited by 108 (2 self)
 Add to MetaCart
A kernel smoother is an intuitive estimate of a regression function or conditional expectation; at each point x_0 the estimate of E(Y | x_0) is a weighted mean of the sample Y_i, with observations close to x_0 receiving the largest weights. Unfortunately this simplicity has flaws. At the boundary of the predictor space, the kernel neighborhood is asymmetric and the estimate may have substantial bias. Bias can be a problem in the interior as well if the predictors are nonuniform or if the regression function has substantial curvature. These problems are particularly severe when the predictors are multidimensional. A variety of kernel modifications have been proposed to provide approximate and asymptotic adjustment for these biases. Such methods generally place substantial restrictions on the regression problems that can be considered; in unfavorable situations, they can perform very poorly. Moreover, the necessary modifications are very difficult to implement in the multidimensional...
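The boundary bias described above can be seen in a small sketch: a local linear fit at x_0 reproduces a linear trend exactly even at the boundary, where the plain kernel (Nadaraya-Watson) mean is biased upward or downward. A numpy illustration with a Gaussian kernel; the names and bandwidth are illustrative:

```python
import numpy as np

def nadaraya_watson(x, y, x0, h):
    """Plain kernel smoother: weighted mean of y near x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def local_linear(x, y, x0, h):
    """Local linear fit at x0 with Gaussian kernel bandwidth h.
    The intercept of the weighted least-squares fit, centered at x0,
    is the estimate; this corrects boundary and curvature bias."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]
```

On data generated from y = 1 + 2x, the local linear estimate at the left boundary x0 = 0 returns the true value 1 exactly, while the kernel mean pulls toward the interior; this is the "automatic kernel carpentry" of the title.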
A unifying view of sparse approximate Gaussian process regression
Journal of Machine Learning Research, 2005
"... We provide a new unifying view, including all existing proper probabilistic sparse approximations for Gaussian process regression. Our approach relies on expressing the effective prior which the methods are using. This allows new insights to be gained, and highlights the relationship between existin ..."
Abstract

Cited by 83 (3 self)
 Add to MetaCart
We provide a new unifying view, including all existing proper probabilistic sparse approximations for Gaussian process regression. Our approach relies on expressing the effective prior which the methods are using. This allows new insights to be gained, and highlights the relationship between existing methods. It also allows for a clear theoretically justified ranking of the closeness of the known approximations to the corresponding full GPs. Finally we point directly to designs of new, better sparse approximations, combining the best of the existing strategies, within attractive computational constraints.
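One member of the family this paper unifies, the subset-of-regressors (SoR) approximation, can be sketched directly from its effective prior. With the inducing inputs taken to be the full training set it reproduces the exact GP predictive mean, which makes the sketch easy to check. Numpy-only; the names are illustrative, not the paper's notation:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential covariance for 1-d inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def full_gp_mean(x, y, xs, noise):
    """Exact GP predictive mean at test inputs xs."""
    K = rbf(x, x) + noise**2 * np.eye(len(x))
    return rbf(xs, x) @ np.linalg.solve(K, y)

def sor_mean(x, y, xu, xs, noise):
    """Subset-of-regressors predictive mean with inducing inputs xu:
    mu = K_su (K_uf K_fu / s^2 + K_uu)^{-1} K_uf y / s^2."""
    Kuf = rbf(xu, x)
    Kuu = rbf(xu, xu)
    Kus = rbf(xu, xs)
    A = Kuf @ Kuf.T / noise**2 + Kuu
    return Kus.T @ np.linalg.solve(A, Kuf @ y) / noise**2
```

With fewer inducing inputs than training points, the same formula costs O(n m^2) rather than O(n^3), which is the computational attraction the paper analyzes.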
Regularization of Wavelet Approximations
, 1999
"... this paper, weintroduce nonlinear regularized wavelet estimators for estimating nonparametric regression functions when sampling points are not uniformly spaced. The approach can apply readily to many other statistical contexts. Various new penalty functions are proposed. The hardthresholding and s ..."
Abstract

Cited by 82 (7 self)
 Add to MetaCart
In this paper, we introduce nonlinear regularized wavelet estimators for estimating nonparametric regression functions when sampling points are not uniformly spaced. The approach can apply readily to many other statistical contexts. Various new penalty functions are proposed. The hard-thresholding and soft-thresholding estimators of Donoho and Johnstone (1994) are specific members of nonlinear regularized wavelet estimators. They correspond to the lower and upper bound of a class of the penalized least-squares estimators. Necessary conditions for penalty functions are given for regularized estimators to possess thresholding properties. Oracle inequalities and universal thresholding parameters are obtained for a large class of penalty functions. The sampling properties of nonlinear regularized wavelet estimators are established, and are shown to be adaptively minimax. To efficiently solve penalized least-squares problems, Nonlinear Regularized Sobolev Interpolators (NRSI) are proposed as initial estimators, which are shown to have good sampling properties. The NRSI is further ameliorated by Regularized One-Step Estimators (ROSE), which are the one-step estimators of the penalized least-squares problems using the NRSI as initial estimators. Two other approaches, the graduated nonconvexity algorithm and wavelet networks, are also introduced to handle penalized least-squares problems. The newly introduced approaches are also illustrated by a few numerical examples.
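The hard and soft rules referred to can each be written as the exact minimizer of a scalar penalized least-squares problem, which is easy to verify numerically by grid search. This is a sketch of that correspondence only; the paper's general penalty classes are not reproduced:

```python
import numpy as np

def hard_threshold(z, lam):
    """Keep-or-kill rule: minimizer of (z - t)^2 + lam^2 * 1{t != 0}."""
    return z * (np.abs(z) > lam)

def soft_threshold(z, lam):
    """Shrink rule: minimizer of (z - t)^2 + 2 * lam * |t|."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def argmin_on_grid(objective, grid):
    """Brute-force minimizer over a grid, to check the closed forms."""
    vals = np.array([objective(t) for t in grid])
    return grid[np.argmin(vals)]
```

Other penalties interpolate between these two extremes, which is the sense in which hard and soft thresholding bound the penalized least-squares class described above.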
Smoothing Spline Models for the Analysis of Nested and Crossed Samples of Curves
Journal of the American Statistical Association, 1998
"... We introduce a class of models for an additive decomposition of groups of curves strati ed by crossed and nested factors, generalizing smoothing splines to such samples by associating them with a corresponding mixed e ects model. The models are also useful for imputation of missing data and explorat ..."
Abstract

Cited by 80 (1 self)
 Add to MetaCart
We introduce a class of models for an additive decomposition of groups of curves stratified by crossed and nested factors, generalizing smoothing splines to such samples by associating them with a corresponding mixed effects model. The models are also useful for imputation of missing data and exploratory analysis of variance. We prove that the best linear unbiased predictors (BLUP) from the extended mixed effects model correspond to solutions of a generalized penalized regression where smoothing parameters are directly related to variance components, and we show that these solutions are natural cubic splines. The model parameters are estimated using a highly efficient implementation of the EM algorithm for restricted maximum likelihood (REML) estimation based on a preliminary eigenvector decomposition. Variability of computed estimates can be assessed with asymptotic techniques or with a novel hierarchical bootstrap resampling scheme for nested mixed effects models. Our methods are applied to menstrual cycle data from studies of reproductive function that measure daily urinary progesterone; the sample of progesterone curves is stratified by cycles nested within subjects nested within conceptive and nonconceptive groups.
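The core equivalence the paper proves, BLUP in a mixed effects model matching a penalized regression with smoothing parameter equal to a variance ratio, can be checked numerically in the simplest random-effects case. This is a toy sketch with synthetic data, not the authors' REML/EM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(12, 4))     # random-effects design matrix (toy)
y = rng.normal(size=12)          # toy response
tau2, sig2 = 2.0, 0.5            # random-effect and error variances

def blup(Z, y, tau2, sig2):
    """BLUP of u in y = Z u + e, u ~ N(0, tau2 I), e ~ N(0, sig2 I)."""
    n = Z.shape[0]
    return tau2 * Z.T @ np.linalg.solve(tau2 * Z @ Z.T + sig2 * np.eye(n), y)

def penalized(Z, y, lam):
    """Ridge-type penalized least squares with smoothing parameter lam."""
    p = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)
```

The two estimators agree exactly when lam = sig2 / tau2, the scalar analogue of the smoothing-parameter-to-variance-component relation used in the paper.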
Extending the Scope of Wavelet Regression Methods By Coefficient-Dependent Thresholding
, 1998
"... Various aspects of the wavelet approach to nonparametric regression are considered, with the overall aim of extending the scope of wavelet techniques, to irregularlyspaced data, to regularlyspaced data sets of arbitrary size, to heteroscedastic and correlated data, and to data some of which may be ..."
Abstract

Cited by 51 (5 self)
 Add to MetaCart
Various aspects of the wavelet approach to nonparametric regression are considered, with the overall aim of extending the scope of wavelet techniques, to irregularly spaced data, to regularly spaced data sets of arbitrary size, to heteroscedastic and correlated data, and to data some of which may be downweighted or omitted as outliers. At the core of the methodology discussed is the following problem: if a sequence has a given covariance structure, what is the variance and covariance structure of its discrete wavelet transform? For sequences whose length is a power of 2, an algorithm for finding all the variances and within-level covariances in the wavelet table is developed and investigated in detail. In particular, it is shown that if the original sequence has a band-limited covariance matrix, then the time required by the algorithm is linear in the length of the sequence. Up to now, most statistical work on wavelet methods presumes that the number of observations is a power of 2 and t...
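The core question, propagating a covariance structure through the discrete wavelet transform, has a direct matrix sketch for a single Haar level: if the sequence has covariance Sigma and W is the orthonormal analysis matrix, the coefficients have covariance W Sigma W^T. This illustrative numpy code is the naive O(n^3) computation, not the paper's linear-time algorithm:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal single-level Haar analysis matrix (n even):
    first n/2 rows are smooth filters, last n/2 are detail filters."""
    W = np.zeros((n, n))
    for i in range(n // 2):
        W[i, 2 * i] = W[i, 2 * i + 1] = 1 / np.sqrt(2)
        W[n // 2 + i, 2 * i] = 1 / np.sqrt(2)
        W[n // 2 + i, 2 * i + 1] = -1 / np.sqrt(2)
    return W

def dwt_covariance(Sigma):
    """Covariance of the single-level DWT of a sequence with covariance Sigma."""
    W = haar_matrix(Sigma.shape[0])
    return W @ Sigma @ W.T
```

For a band-limited Sigma the transformed covariance is again sparse within levels, which is what the paper's algorithm exploits to reach linear time.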
Generalized linear models with functional predictors
Journal of the Royal Statistical Society, Series B, 2002
"... In this paper we present a technique for extending generalized linear models (GLM) to the situation where some of the predictor variables are observations from a curve or function. The technique is particularly useful when only fragments of each curve have been observed. We demonstrate, on both simu ..."
Abstract

Cited by 45 (6 self)
 Add to MetaCart
In this paper we present a technique for extending generalized linear models (GLM) to the situation where some of the predictor variables are observations from a curve or function. The technique is particularly useful when only fragments of each curve have been observed. We demonstrate, on both simulated and real world data sets, how this approach can be used to perform linear, logistic and censored regression with functional predictors. In addition, we show how functional principal components can be used to gain insight into the relationship between the response and functional predictors. Finally, we extend the methodology to apply GLM and principal components to standard missing data problems.
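The final step described, principal components of the curves feeding a GLM, can be sketched end-to-end on synthetic data. All names, the toy data, and the plain gradient-descent logistic fit below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def fpc_scores(curves, n_comp=2):
    """Principal-component scores of a sample of curves (rows = curves),
    a simple surrogate for functional principal components."""
    mu = curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(curves - mu, full_matrices=False)
    return (curves - mu) @ Vt[:n_comp].T

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression on the scores."""
    X1 = np.column_stack([np.ones(len(X)), X])
    b = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X1 @ b))
        b -= lr * X1.T @ (p - y) / len(y)
    return b

def predict_prob(X, b):
    """Fitted class-1 probabilities for score matrix X."""
    X1 = np.column_stack([np.ones(len(X)), X])
    return 1 / (1 + np.exp(-X1 @ b))

# Toy example: two groups of noisy curves differing by a smooth bump.
rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 20)
labels = np.repeat([0.0, 1.0], 20)
curves = rng.normal(0.0, 0.1, (40, 20)) + np.outer(labels, np.sin(np.pi * grid))
scores = fpc_scores(curves, 2)
coef = fit_logistic(scores, labels)
```

The low-dimensional scores stand in for the functional predictor, so the same recipe extends to censored or other GLM responses by swapping the link, as the abstract indicates.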