Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties (2001)

by Jianqing Fan, Runze Li
Citations: 942 (61 self)

BibTeX

@MISC{Fan01variableselection,
    author = {Jianqing Fan and Runze Li},
    title = {Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties},
    year = {2001}
}


Abstract

Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized likelihood approaches are proposed to handle these kinds of problems. The proposed methods select variables and estimate coefficients simultaneously. Hence they enable us to construct confidence intervals for estimated parameters. The proposed approaches are distinguished from others in that the penalty functions are symmetric, nonconcave on (0, ∞), and have singularities at the origin to produce sparse solutions. Furthermore, the penalty functions should be bounded by a constant to reduce bias and satisfy certain conditions to yield continuous solutions. A new algorithm is proposed for optimizing penalized likelihood functions. The proposed ideas are widely applicable. They are readily applied to a variety of parametric models such as generalized linear models and robust regression models. They can also be applied easily to nonparametric modeling by using wavelets and splines. Rates of convergence of the proposed penalized likelihood estimators are established. Furthermore, with proper choice of regularization parameters, we show that the proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known. Our simulation shows that the newly proposed methods compare favorably with other variable selection techniques. Furthermore, the standard error formulas are tested to be accurate enough for practical applications.
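
The penalty family proposed in this paper is the smoothly clipped absolute deviation (SCAD) penalty: it is singular at the origin (producing sparse solutions) and its derivative vanishes for large arguments, so the penalty is bounded by a constant and bias on large coefficients is reduced. Below is a minimal illustrative sketch, not the authors' code, of the SCAD penalty and of a local-quadratic-approximation iteration for penalized least squares of the kind the paper proposes. The helper names (scad_penalty, lqa_penalized_ls), the tolerance and zero-threshold settings, and the toy data are assumptions made for this example; a = 3.7 follows the value suggested in the paper.

# Minimal sketch of the SCAD penalty and a local-quadratic-approximation (LQA)
# solver for penalized least squares: (1/2)||y - X b||^2 + n * sum_j p_lambda(|b_j|).
# Names and settings are illustrative, not taken from the paper's text.
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty p_lambda(|theta|); a = 3.7 is the value suggested in the paper."""
    t = np.abs(theta)
    small = lam * t
    mid = -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1))
    large = (a + 1) * lam**2 / 2
    return np.where(t <= lam, small, np.where(t <= a * lam, mid, large))

def scad_derivative(theta, lam, a=3.7):
    """p'_lambda(t) = lam * [ I(t <= lam) + (a*lam - t)_+ / ((a-1)*lam) * I(t > lam) ]."""
    t = np.abs(theta)
    return lam * np.where(t <= lam, 1.0,
                          np.maximum(a * lam - t, 0.0) / ((a - 1) * lam))

def lqa_penalized_ls(X, y, lam, a=3.7, n_iter=50, tol=1e-8, eps=1e-6):
    """Iteratively reweighted ridge updates via local quadratic approximation.
    Coefficients whose magnitude falls below eps are set to zero (sparsity)."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # initial ordinary least squares fit
    for _ in range(n_iter):
        t = np.abs(beta)
        active = t > eps
        beta_new = np.zeros(p)
        if active.any():
            Xa = X[:, active]
            # Diagonal LQA weights p'_lambda(|beta_j|) / |beta_j|
            w = scad_derivative(t[active], lam, a) / t[active]
            A = Xa.T @ Xa + n * np.diag(w)
            beta_new[active] = np.linalg.solve(A, Xa.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta

# Toy usage: a sparse linear model where only the first two coefficients are nonzero.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
beta_true = np.array([3.0, 1.5, 0, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.standard_normal(200)
print(lqa_penalized_ls(X, y, lam=0.3))

In this toy run the small spurious coefficients are driven to exactly zero while the two large coefficients are left essentially unpenalized, which is the oracle-like behavior the abstract describes.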

Keyphrases

variable selection, nonconcave penalized likelihood, oracle property, penalty function, variable selection technique, high-dimensional statistical modeling, likelihood function, oracle procedure, likelihood estimator, proper choice, generalized linear model, nonparametric regression, parametric model, satisfy certain condition, method compare, regularization parameter, sparse solution, method select variable, practical application, estimate coefficient, continuous solution, variable selection process, likelihood approach, robust regression model, stepwise selection procedure, many approach, stochastic error, standard error formula, correct submodel, new algorithm, confidence interval
