Results 1 - 10 of 101,925
Multivariate adaptive regression splines
- The Annals of Statistics, 1991
"... A new method is presented for flexible regression modeling of high dimensional data. The model takes the form of an expansion in product spline basis functions, where the number of basis functions as well as the parameters associated with each one (product degree and knot locations) are automaticall ..."
Cited by 700 (2 self)
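In common notation (illustrative, not quoted from the paper), the MARS expansion referred to in the snippet has the form

    \hat{f}(x) = a_0 + \sum_{m=1}^{M} a_m B_m(x), \qquad B_m(x) = \prod_{k=1}^{K_m} \big[\, s_{km}\,( x_{v(k,m)} - t_{km} ) \,\big]_+ ,

where each basis function B_m is a product of hinge functions with signs s_{km}, knot locations t_{km}, and product degree K_m; the number of terms M and these per-term parameters are the quantities the procedure selects automatically.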
Projection Pursuit Regression
- Journal of the American Statistical Association, 1981
"... A new method for nonparametric multiple regression is presented. The procedure models the regression surface as a sum of general- smooth functions of linear combinations of the predictor variables in an iterative manner. It is more general than standard stepwise and stagewise regression procedures, ..."
Cited by 550 (6 self)
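Using generic symbols rather than the paper's notation, the fitted projection pursuit regression surface is a sum of ridge functions,

    \hat{f}(x) = \sum_{m=1}^{M} g_m\left( \alpha_m^{\top} x \right),

where each \alpha_m is a direction (a linear combination of the predictors) and each g_m is a general smooth function; the pairs (\alpha_m, g_m) are added and refitted iteratively.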
Instrumental Variables Regression with Weak Instruments
- ECONOMETRICA, 1997
"... ... The theory suggests concrete guidelines for applied work, including using nonstandard methods for construction of confidence regions. These results are used to interpret Angrist and Krueger's (1991) estimates of the returns to education: whereas TSLS estimates with many instruments approac ..."
Cited by 1691 (15 self)
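As background for the TSLS estimates mentioned above, the two-stage least squares estimator with instrument matrix Z can be written in standard notation (not taken from the paper) as

    \hat{\beta}_{TSLS} = (X^{\top} P_Z X)^{-1} X^{\top} P_Z y, \qquad P_Z = Z (Z^{\top} Z)^{-1} Z^{\top}.

When the instruments are only weakly correlated with the endogenous regressors, the usual asymptotic normal approximation for this estimator can be poor, which is the setting in which the paper's nonstandard confidence regions are recommended.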
A tutorial on support vector regression, 2004
"... In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing ..."
Cited by 865 (3 self)
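The function estimate discussed in the tutorial is typically obtained from the \varepsilon-insensitive support vector regression problem, written here in its standard linear (primal) form rather than the tutorial's exact notation:

    \min_{w, b, \xi, \xi^*} \; \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} (\xi_i + \xi_i^*)
    \text{subject to} \quad y_i - \langle w, x_i \rangle - b \le \varepsilon + \xi_i, \quad \langle w, x_i \rangle + b - y_i \le \varepsilon + \xi_i^*, \quad \xi_i, \xi_i^* \ge 0.

Its dual is the quadratic (convex) program referred to in the snippet, and kernels replace the inner products for nonlinear estimation.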
Regression Shrinkage and Selection Via the Lasso
- JOURNAL OF THE ROYAL STATISTICAL SOCIETY, SERIES B, 1994
"... We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactl ..."
Cited by 4212 (49 self)
... an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described.
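The constraint described in the snippet can be written as the optimization problem

    \min_{\beta_0, \beta} \; \sum_{i=1}^{N} \Big( y_i - \beta_0 - \sum_{j} x_{ij}\beta_j \Big)^{2} \quad \text{subject to} \quad \sum_{j} |\beta_j| \le t,

or equivalently as penalized least squares with an \ell_1 penalty \lambda \sum_j |\beta_j|; shrinking t (or increasing \lambda) drives some coefficients exactly to zero, which is the source of the variable selection behaviour.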
Additive Logistic Regression: a Statistical View of Boosting
- Annals of Statistics, 1998
"... Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms can often be dramatically improved by sequentially applying them to reweighted versions of the input dat ..."
Cited by 1750 (25 self)
Greedy Function Approximation: A Gradient Boosting Machine
- Annals of Statistics, 2000
"... Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest{descent minimization. A general gradient{descent \boosting" paradigm is developed for additi ..."
Cited by 1000 (13 self)
... for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least-absolute-deviation, and Huber-M loss functions for regression, and multi-class logistic likelihood for classification. Special enhancements are derived for the particular case where the individual
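A minimal sketch of the gradient-boosting paradigm for the least-squares case, where the negative gradient of the loss is simply the residual; it uses scikit-learn regression trees as the base learners and illustrative parameter values, not the paper's notation or experiments:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def gradient_boost_ls(X, y, n_trees=100, learning_rate=0.1, max_depth=3):
        """Stagewise additive expansion: each tree is fit to the current residuals."""
        f0 = np.mean(y)                          # constant initial approximation
        pred = np.full(len(y), f0)
        trees = []
        for _ in range(n_trees):
            residuals = y - pred                 # negative gradient of squared-error loss
            tree = DecisionTreeRegressor(max_depth=max_depth)
            tree.fit(X, residuals)
            pred = pred + learning_rate * tree.predict(X)
            trees.append(tree)
        return f0, trees

    def gradient_boost_predict(X, f0, trees, learning_rate=0.1):
        return f0 + learning_rate * sum(t.predict(X) for t in trees)

Other losses (least absolute deviation, Huber, multi-class logistic) change only which pseudo-residuals the trees are fit to.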
Pseudo-Random Generation from One-Way Functions
- PROC. 20TH STOC, 1988
"... Pseudorandom generators are fundamental to many theoretical and applied aspects of computing. We show howto construct a pseudorandom generator from any oneway function. Since it is easy to construct a one-way function from a pseudorandom generator, this result shows that there is a pseudorandom gene ..."
Cited by 861 (18 self)
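For reference, the two objects related by this result, stated in their standard definitions (not quoted from the paper):

    f \text{ is one-way:} \quad \Pr_{x \leftarrow \{0,1\}^n}\big[ A(f(x)) \in f^{-1}(f(x)) \big] \le \mathrm{negl}(n) \ \text{ for every PPT algorithm } A,
    G \text{ is a pseudorandom generator:} \quad \big| \Pr[D(G(x)) = 1] - \Pr[D(u) = 1] \big| \le \mathrm{negl}(n) \ \text{ for every PPT } D, \ u \leftarrow \{0,1\}^{l(n)}, \ l(n) > n.

The paper's direction (a generator from any one-way function), combined with the easy converse noted in the snippet, makes the existence of the two objects equivalent.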
Graph-based algorithms for Boolean function manipulation
- IEEE TRANSACTIONS ON COMPUTERS, 1986
"... In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on th ..."
Cited by 3526 (46 self)
... on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional
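A toy sketch of the representation in question: an ordered decision graph built by Shannon expansion, with the reduction rule that a node whose two branches coincide is dropped. It enumerates all assignments, so it is illustrative only and is not one of the paper's algorithms, which manipulate the graphs directly:

    unique = {}                          # unique table: one stored node per (var, low, high)

    def mk(var, low, high):
        if low == high:                  # redundant test: both branches are the same subgraph
            return low
        return unique.setdefault((var, low, high), (var, low, high))

    def build(f, order, i=0, assignment=()):
        """Shannon expansion f = (not x)*f[x=0] + x*f[x=1] over a fixed variable order."""
        if i == len(order):
            return f(dict(zip(order, assignment)))      # terminal: True or False
        low = build(f, order, i + 1, assignment + (False,))
        high = build(f, order, i + 1, assignment + (True,))
        return mk(order[i], low, high)

    # Example: the function x AND (y OR z) under the order x < y < z
    bdd = build(lambda v: v["x"] and (v["y"] or v["z"]), ["x", "y", "z"])

Because equal sub-functions produce equal (var, low, high) triples under a fixed order, shared structure collapses automatically, which is what keeps the graphs small for many practical functions.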
Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties, 2001
"... Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized ..."
Cited by 948 (62 self)
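In the notation commonly used for this line of work (illustrative, not quoted from the paper), the penalized approach estimates the coefficients by maximizing

    \ell(\beta) - n \sum_{j=1}^{d} p_{\lambda}\big( |\beta_j| \big),

where \ell is the log-likelihood and p_{\lambda} is a nonconcave penalty such as the paper's SCAD penalty; the penalty is designed so that the resulting estimator is sparse (small estimated coefficients are set exactly to zero), nearly unbiased for large coefficients, and continuous in the data, avoiding the instability of stepwise selection.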