Results 1–10 of 121
Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
, 2001
Abstract

Cited by 346 (27 self)
Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized likelihood approaches are proposed to handle these kinds of problems. The proposed methods select variables and estimate coefficients simultaneously. Hence they enable us to construct confidence intervals for estimated parameters. The proposed approaches are distinguished from others in that the penalty functions are symmetric, nonconcave on (0, ∞), and have singularities at the origin to produce sparse solutions. Furthermore, the penalty functions should be bounded by a constant to reduce bias and satisfy certain conditions to yield continuous solutions. A new algorithm is proposed for optimizing penalized likelihood functions. The proposed ideas are widely applicable. They are readily applied to a variety of parametric models such as generalized linear models and robust regression models. They can also be applied easily to nonparametric modeling by using wavelets and splines. Rates of convergence of the proposed penalized likelihood estimators are established. Furthermore, with proper choice of regularization parameters, we show that the proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known. Our simulation shows that the newly proposed methods compare favorably with other variable selection techniques. Furthermore, the standard error formulas are tested to be accurate enough for practical applications.
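The penalty described above (symmetric, singular at the origin, bounded by a constant) can be illustrated with the SCAD penalty associated with this paper. The sketch below is an illustrative assumption on my part, using the commonly cited default a = 3.7, and shows only the penalty function, not the full estimation algorithm:

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD-style penalty: linear (singular) near zero, which produces
    sparsity; quadratic transition; constant beyond a*lam, which bounds
    the penalty and reduces bias on large coefficients."""
    t = np.abs(np.asarray(theta, dtype=float))
    linear = lam * t                                           # |theta| <= lam
    quad = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))   # lam < |theta| <= a*lam
    const = np.full_like(t, lam**2 * (a + 1) / 2)              # |theta| > a*lam (bounded)
    return np.where(t <= lam, linear, np.where(t <= a * lam, quad, const))
```

The three pieces join continuously, and the constant tail is what distinguishes this family from the LASSO's unbounded linear penalty.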
Large Sample Sieve Estimation of Semi-Nonparametric Models
 Handbook of Econometrics
, 2007
Abstract

Cited by 92 (17 self)
Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other complications such as introducing infinite-dimensional parameter spaces that may not be compact. The method of sieves provides one way to tackle such complexities by optimizing an empirical criterion function over a sequence of approximating parameter spaces, called sieves, which are significantly less complex than the original parameter space. With different choices of criteria and sieves, the method of sieves is very flexible in estimating complicated econometric models. For example, it can simultaneously estimate the parametric and nonparametric components in semi-nonparametric models with or without constraints. It can easily incorporate prior information, often derived from economic theory, such as monotonicity, convexity, additivity, multiplicity, exclusion and nonnegativity. This chapter describes estimation of semi-nonparametric econometric models via the method of sieves. We present some general results on the large sample properties of the sieve estimates, including consistency of the sieve extremum estimates, convergence rates of the sieve M-estimates, pointwise normality of series estimates of regression functions, root-n asymptotic normality and efficiency of sieve estimates of smooth functionals of infinite-dimensional parameters. Examples are used to illustrate the general results.
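The sieve idea, optimizing over approximating spaces that grow with the sample size, can be sketched in its simplest form with a polynomial sieve for least-squares regression. The growth rate n^(1/3) and the test function below are assumptions chosen for the example, not prescriptions from the chapter:

```python
import numpy as np

def sieve_polyfit(x, y):
    """Least squares over a polynomial sieve: the approximating space
    (polynomials of degree J_n) grows slowly with the sample size n,
    so J_n -> infinity while J_n / n -> 0."""
    n = len(x)
    degree = max(1, int(round(n ** (1.0 / 3.0))))  # assumed sieve growth rate
    coefs = np.polyfit(x, y, degree)
    return np.polyval(coefs, x), degree

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 400))
f = np.sin(2 * np.pi * x)             # unknown regression function (example)
y = f + 0.1 * rng.normal(size=400)
fit, deg = sieve_polyfit(x, y)
```

Because the sieve dimension grows slowly relative to n, the variance stays controlled while the approximation bias shrinks, which is the trade-off behind the convergence-rate results the abstract mentions.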
Hazard Regression
 Journal of the American Statistical Association
, 1995
Abstract

Cited by 80 (19 self)
An automatic procedure that uses linear splines and their tensor products is proposed for fitting a regression model to data involving a polychotomous response variable and one or more predictors. The fitted model can be used for multiple classification. The automatic fitting procedure involves maximum likelihood estimation, stepwise addition, stepwise deletion, and model selection by AIC, cross-validation or an independent test set. A modified version of the algorithm has been constructed that is applicable to large data sets, and it is illustrated using a phoneme recognition data set with 250,000 cases, 45 classes and 63 predictors.
Nonparametric Mixed Effects Models for Unequally Sampled Noisy Curves
 Biometrics
, 1998
Abstract

Cited by 69 (2 self)
We propose a method of analyzing collections of related curves in which the individual curves are modeled as spline functions with random coefficients. The method is applicable when the individual curves are sampled at variable and irregularly spaced points. This produces a low rank, low frequency approximation to the covariance structure, which can be estimated naturally by the EM algorithm. Smooth curves for individual trajectories are constructed as BLUP estimates, combining data from that individual and the entire collection. This framework leads naturally to methods for examining the effects of covariates on the shapes of the curves. We use model selection techniques (AIC, BIC, and cross-validation) to select the number of breakpoints for the spline approximation. We believe that the methodology we propose provides a simple, flexible, and computationally efficient means of functional data analysis. We illustrate it with two sets of data.
Bayesian P-Splines
 Journal of Computational and Graphical Statistics
, 2004
Abstract

Cited by 67 (21 self)
P-splines are an attractive approach for modelling nonlinear smooth effects of covariates within the generalized additive and varying coefficient models framework. In this paper we propose a Bayesian version for P-splines and generalize the approach for one-dimensional curves to two-dimensional surface fitting for modelling interactions between metrical covariates. A Bayesian approach to P-splines has the advantage of allowing for simultaneous estimation of smooth functions and smoothing parameters. Moreover, it can easily be extended to more complex formulations, for example to mixed models with random effects for serially or spatially correlated response. Additionally, the assumption of constant smoothing parameters can be replaced by allowing the smoothing parameters to be locally adaptive. This is particularly useful in situations with changing curvature of the underlying smooth function or where the function is highly oscillating. Inference is fully Bayesian and uses recent MCMC techniques for drawing random samples from the posterior. In a couple of simulation studies the performance of Bayesian P-splines is studied and compared to other approaches in the literature. We illustrate the approach by a complex application on rents for flats in Munich.
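The deterministic backbone of a P-spline fit, and the Gaussian limit of the random-walk priors used in the Bayesian version, is least squares with a difference penalty on coefficients. A minimal sketch, assuming one coefficient per observation (a Whittaker-style discrete smoother) instead of a B-spline basis, purely to keep the example short:

```python
import numpy as np

def difference_penalty_smooth(y, lam, order=2):
    """Minimize ||y - theta||^2 + lam * ||D theta||^2, where D takes
    order-th differences of theta (the P-spline roughness penalty)."""
    n = len(y)
    D = np.diff(np.eye(n), n=order, axis=0)  # (n - order) x n difference matrix
    theta = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    return theta
```

With a second-order penalty, polynomials of degree one lie in the null space of D, so heavy smoothing shrinks the fit toward a straight line; replacing the fixed lam with a prior on it is what gives the Bayesian version its simultaneous estimation of function and smoothing parameter.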
Bivariate Quantile Smoothing Splines
 Biometrika
, 1998
Abstract

Cited by 58 (13 self)
It has long been recognized that the mean provides an inadequate summary while the set of quantiles can supply a more complete description of a sample. We introduce bivariate quantile smoothing splines, which belong to the space of bilinear tensor-product splines, as nonparametric estimators for the conditional quantile functions in a two-dimensional design space. The estimators can be computed using standard linear programming techniques and can further be used as building blocks for conditional quantile estimation in higher dimensions. For moderately large data sets, we recommend using penalized bivariate B-splines as approximate solutions. We use real and simulated data to illustrate the proposed methodology. KEY WORDS: Conditional quantile; Linear program; Nonparametric regression; Robust regression; Schwarz information criterion; Tensor-product spline.
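The abstract notes that the estimators reduce to standard linear programming. The toy sketch below casts the simplest case, a single sample quantile, as a check-loss linear program; the spline estimators add basis coefficients and penalty constraints but have the same structure. The variable names and the scipy formulation are my own illustration:

```python
import numpy as np
from scipy.optimize import linprog

def quantile_lp(y, tau):
    """Solve min_q sum_i rho_tau(y_i - q) as an LP: each residual is split
    into nonnegative parts u_plus - u_minus, and the free location q into
    q_pos - q_neg, so all decision variables are >= 0."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # variables: [q_pos, q_neg, u_plus (n of them), u_minus (n of them)]
    c = np.concatenate([[0.0, 0.0], tau * np.ones(n), (1 - tau) * np.ones(n)])
    # constraint per observation: q + u_plus_i - u_minus_i = y_i
    A_eq = np.hstack([np.ones((n, 1)), -np.ones((n, 1)), np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[0] - res.x[1]
```

The asymmetric weights tau and (1 - tau) on positive and negative residuals are exactly the check loss, which is why quantile fitting, unlike least squares, is a linear rather than quadratic program.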
A Generalized Approximate Cross Validation for Smoothing Splines with Non-Gaussian Data
 Statistica Sinica 6
, 1996
Abstract

Cited by 53 (23 self)
In this paper, we propose a Generalized Approximate Cross Validation (GACV) function for estimating the smoothing parameter in the penalized log likelihood regression problem with non-Gaussian data. This GACV is obtained by, first, obtaining an approximation to the leaving-out-one function based on the negative log likelihood, and then, in a step reminiscent of that used to get from leaving-out-one cross validation to GCV in the Gaussian case, we replace diagonal elements of certain matrices by 1/n times the trace. A numerical simulation with Bernoulli data is used to compare the smoothing parameter λ chosen by this approximation procedure with the λ chosen from the two most often used algorithms based on the generalized cross validation procedure (O’Sullivan et al. (1986), Gu (1990, 1992)). In the examples here, the GACV estimate produces a better fit of the truth in terms of minimizing the Kullback-Leibler distance. Figures suggest that the GACV curve may be an approximately unbiased estimate of the Kullback-Leibler distance in the Bernoulli data case; however, a theoretical proof is yet to be found.
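In the Gaussian case that GACV generalizes, the GCV target is n·RSS / tr(I − A(λ))², where A(λ) is the hat matrix of the linear smoother. A minimal sketch with a ridge-type smoother standing in for the paper's spline setup (my simplification, not the paper's estimator):

```python
import numpy as np

def gcv_select(y, X, lambdas):
    """Pick lambda minimizing GCV(lam) = n * RSS / tr(I - A)^2 for the
    linear smoother A = X (X'X + lam I)^(-1) X'."""
    n = len(y)
    best = None
    for lam in lambdas:
        A = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
        rss = np.sum((y - A @ y) ** 2)                # residual sum of squares
        gcv = n * rss / (n - np.trace(A)) ** 2        # trace replaces diagonal entries
        if best is None or gcv < best[1]:
            best = (lam, gcv)
    return best[0]
```

Replacing individual diagonal elements of A by their average, tr(A)/n, is the step the abstract describes as carrying leaving-out-one cross validation over to GCV.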
Convergence rates of posterior distributions
 Ann. Statist
, 2000
Abstract

Cited by 43 (11 self)
We consider the asymptotic behavior of posterior distributions and Bayes estimators for infinite-dimensional statistical models. We give general results on the rate of convergence of the posterior measure. These are applied to several examples, including priors on finite sieves, log-spline models, Dirichlet processes and interval censoring.
Selecting the Number of Knots For Penalized Splines
, 2000
Abstract

Cited by 40 (7 self)
Penalized splines, or P-splines, are regression splines fit by least-squares with a roughness penalty. P-splines have much in common with smoothing splines, but the type of penalty used with a P-spline is somewhat more general than for a smoothing spline. Also, the number and location of the knots of a P-spline are not fixed as with a smoothing spline. Generally, the knots of a P-spline are at fixed quantiles of the independent variable and the only tuning parameter to choose is the number of knots. In this article, the effects of the number of knots on the performance of P-splines are studied. Two algorithms are proposed for the automatic selection of the number of knots. The myopic algorithm stops when no improvement in the generalized cross validation statistic (GCV) is noticed with the last increase in the number of knots. The full search examines all candidates in a fixed sequence of possible numbers of knots and chooses the candidate that minimizes GCV.
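The myopic stopping rule can be sketched as follows. Unpenalized piecewise-linear regression splines with knots at quantiles stand in for the paper's penalized fit, and the parameter count serves as the trace of the hat matrix in GCV; both are simplifying assumptions for the example:

```python
import numpy as np

def fit_gcv(x, y, num_knots):
    """OLS fit with a truncated-linear basis at quantile knots; returns GCV."""
    n = len(x)
    knots = np.quantile(x, np.linspace(0, 1, num_knots + 2)[1:-1])
    B = np.column_stack([np.ones(n), x] + [np.maximum(x - k, 0) for k in knots])
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    rss = np.sum((y - B @ coef) ** 2)
    p = B.shape[1]                     # trace of the OLS hat matrix
    return n * rss / (n - p) ** 2

def myopic_knots(x, y, max_knots=20):
    """Stop at the first number of knots that fails to improve GCV."""
    best_k, best_gcv = 1, fit_gcv(x, y, 1)
    for k in range(2, max_knots + 1):
        g = fit_gcv(x, y, k)
        if g >= best_gcv:              # no improvement: stop (myopic rule)
            break
        best_k, best_gcv = k, g
    return best_k
```

The full-search variant would instead evaluate fit_gcv for every candidate in the sequence and return the global minimizer, at the cost of fitting all of them.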