Results 1 – 10 of 41,832
Model Selection with Cross-Validations and Bootstraps – Application to Time Series Prediction with RBFN Models
 Artificial Neural Networks and Neural Information Processing – ICANN/ICONIP 2003
, 2003
"... This paper compares several model selection methods, based on experimental estimates of their generalization errors. Experiments in the context of nonlinear time series prediction by Radial-Basis Function Networks show the superiority of the bootstrap methodology over classical cross-validations. ..."
Cited by 32 (16 self)
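As a toy illustration of the bootstrap error estimation this paper evaluates, the following sketch (my own construction: ordinary least squares on synthetic data stands in for the paper's RBFN time-series models, and all names and sample sizes are made up) estimates generalization error from the out-of-bag points of bootstrap resamples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (a stand-in for the paper's time-series setting).
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)

def fit_predict(X_tr, y_tr, X_te):
    """Ordinary least squares fit and prediction."""
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ beta

def bootstrap_error(X, y, n_boot=100):
    """Leave-one-out bootstrap estimate of generalization error:
    train on a bootstrap resample, test on the points left out."""
    n = len(y)
    errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample with replacement
        out = np.setdiff1d(np.arange(n), idx)   # out-of-bag points
        if out.size == 0:
            continue
        pred = fit_predict(X[idx], y[idx], X[out])
        errs.append(np.mean((y[out] - pred) ** 2))
    return float(np.mean(errs))

print(round(bootstrap_error(X, y), 3))
```

Each resample leaves roughly 37 percent of the points out-of-bag, so every replicate yields an honest test set without touching the training data.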
An introduction to variable and feature selection
 Journal of Machine Learning Research
, 2003
"... Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. ..."
Cited by 1283 (16 self)
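One of the simplest families such a survey covers is univariate "filter" selection; the sketch below (my own synthetic example, not from the paper) ranks variables by absolute correlation with the response and recovers the two informative ones:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 10
X = rng.normal(size=(n, p))
# Only the first two variables actually influence y.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Filter-style ranking: score each variable by |correlation with y|.
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
top2 = np.argsort(scores)[::-1][:2]
print(sorted(top2.tolist()))
```

Filters like this scale to the tens or hundreds of thousands of variables the survey mentions, at the cost of ignoring interactions between variables.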
Regression Shrinkage and Selection Via the Lasso
 Journal of the Royal Statistical Society, Series B
, 1994
"... We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly zero and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. ..."
Cited by 4055 (51 self)
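A minimal sketch of the exact-zero behaviour the abstract describes, using cyclic coordinate descent on the penalized (Lagrangian) form of the lasso rather than the constrained form stated above; the data, penalty level `lam`, and iteration count are my own arbitrary choices:

```python
import numpy as np

def soft_threshold(z, gamma):
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent on the penalized form
    (1/2n)||y - Xb||^2 + lam * ||b||_1, with standardized columns."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
    return beta

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
X = (X - X.mean(0)) / X.std(0)
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.1, size=100)
beta = lasso_cd(X, y, lam=0.1)
print(np.round(beta, 2))  # irrelevant coefficients shrink to exactly 0
```

The soft-threshold step is what produces coefficients that are exactly zero, the property that makes the fitted models interpretable.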
Least angle regression
 Ann. Statist.
, 2004
"... The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm ..."
Cited by 1308 (43 self)
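The abstract positions LARS against classical forward selection; the sketch below shows that greedier baseline (not LARS itself), on a synthetic problem of my own construction: at each step the variable most correlated with the current residual enters the model and the fit is recomputed by least squares:

```python
import numpy as np

def forward_selection(X, y, k):
    """Classical greedy forward selection: at each step add the variable
    most correlated with the current residual, then refit by OLS.
    (LARS is a less greedy refinement of this kind of procedure.)"""
    active = []
    resid = y.copy()
    for _ in range(k):
        scores = np.abs(X.T @ resid)
        scores[active] = -np.inf          # skip already-selected variables
        j = int(np.argmax(scores))
        active.append(j)
        beta, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        resid = y - X[:, active] @ beta
    return active

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 8))
y = 2.5 * X[:, 4] + 1.0 * X[:, 6] + rng.normal(scale=0.3, size=150)
print(forward_selection(X, y, 2))
```

Forward selection commits fully to each chosen variable; LARS instead advances coefficients only as far as needed before the next variable becomes equally correlated with the residual.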
The Jackknife and the Bootstrap for General Stationary Observations
, 1989
"... this paper we will always consider statistics T_N of the form T_N(X_1, ..., X_N) = T(ρ ..."
Cited by 399 (2 self)
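A sketch of the moving-block bootstrap idea for stationary, serially dependent observations, in the spirit of this paper; the AR(1) series, block length, and replicate count below are my own arbitrary choices, not values from the paper:

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """One moving-block bootstrap resample of a stationary series:
    concatenate randomly chosen overlapping blocks of length block_len,
    then truncate to the original length."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

rng = np.random.default_rng(4)
# AR(1) series: dependence that i.i.d. resampling would destroy.
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + rng.normal()

# Bootstrap distribution of the sample mean, preserving short-range dependence.
means = [moving_block_bootstrap(x, 25, rng).mean() for _ in range(200)]
print(round(float(np.std(means)), 3))
```

Resampling blocks rather than individual observations keeps the within-block dependence structure intact, which is the whole point for serially correlated data.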
Cross-Validation
"... Introduction. Cross-validation is a resampling technique that is often used for the assessment of statistical models, as well as for selection amongst competing model alternatives. Basically, it is a method to estimate the prediction error of statistical predictor functions. This technique can be very u ..."
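A minimal sketch of the k-fold variant of the technique this entry introduces, assuming an ordinary least squares predictor on synthetic data (the predictor, fold count, and data are my own choices):

```python
import numpy as np

def kfold_mse(X, y, k=5):
    """k-fold cross-validation estimate of prediction error for OLS:
    each fold is held out once while the model is fit on the rest."""
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        beta, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)
        errs.append(np.mean((y[fold] - X[fold] @ beta) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, 0.0, -1.0, 2.0]) + rng.normal(scale=0.5, size=100)
print(round(kfold_mse(X, y), 3))  # estimate of out-of-sample MSE
```

Because every observation is predicted exactly once by a model that never saw it, the averaged error is an estimate of out-of-sample performance, usable both for assessment and for choosing between competing models.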
Bagging Predictors
 Machine Learning
, 1996
"... Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability ..."
Cited by 3574 (1 self)
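A sketch of the regression case described above, with OLS standing in for the paper's tree and subset-selection base learners (my own substitution; since OLS is a stable learner, the accuracy gains the abstract reports would be minimal here, which is exactly the instability point it makes):

```python
import numpy as np

def bagged_predict(X_tr, y_tr, X_te, n_boot, rng):
    """Bagging for regression: fit a base learner (here OLS) on bootstrap
    replicates of the learning set and average the predictions."""
    n = len(y_tr)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # one bootstrap replicate
        beta, *_ = np.linalg.lstsq(X_tr[idx], y_tr[idx], rcond=None)
        preds.append(X_te @ beta)
    return np.mean(preds, axis=0)         # average over the versions

rng = np.random.default_rng(6)
X = rng.normal(size=(80, 3))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.4, size=80)
X_new = rng.normal(size=(5, 3))
print(np.round(bagged_predict(X, y, X_new, 50, rng), 2))
```

For classification the averaging step would be replaced by a plurality vote over the versions, as the abstract states.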
USER ACCEPTANCE OF INFORMATION TECHNOLOGY: TOWARD A UNIFIED VIEW
, 2003
"... Information technology (IT) acceptance research has yielded many competing models, each with different sets of acceptance determinants. In this paper, we (1) review user acceptance literature and discuss eight prominent models, (2) empirically compare the eight models and their extensions, (3) formulate a unified model that integrates elements across the eight models, and (4) empirically validate the unified model. The eight models reviewed are the theory of reasoned action, the technology acceptance model, the motivational model, the theory of planned behavior, a model combining the technology ..."
Cited by 1665 (9 self)
How Much Should We Trust Differences-in-Differences Estimates? Quarterly Journal of Economics 119:249–75
, 2004
"... Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on fema ... at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking ..."
Cited by 775 (1 self)
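A heavily simplified sketch of the "resample whole states" block-bootstrap idea for serially correlated panels (the abstract is cut off before specifying the paper's exact bootstrap, so this is my own construction: a toy placebo design, an AR(1) outcome, and a two-by-two DD of cell means rather than a full regression):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy state-year panel with serially correlated outcomes and no true effect.
n_states, n_years = 20, 10
y = np.zeros((n_states, n_years))
for t in range(n_years):
    y[:, t] = (0.8 * y[:, t - 1] if t else 0) + rng.normal(size=n_states)

treated = np.arange(n_states) < 10   # placebo "law" in half the states
post = np.arange(n_years) >= 5       # in force for the later years

def dd(y):
    """Two-by-two difference-in-differences of cell means."""
    return ((y[treated][:, post].mean() - y[treated][:, ~post].mean())
            - (y[~treated][:, post].mean() - y[~treated][:, ~post].mean()))

# Block bootstrap: resample entire states so that within-state serial
# correlation is preserved in every replicate.
boot = [dd(y[rng.integers(0, n_states, size=n_states)]) for _ in range(300)]
print(round(float(np.std(boot)), 3))  # serial-correlation-robust SE
```

Because each replicate keeps every sampled state's full time series intact, the bootstrap standard error reflects the serial correlation that conventional DD standard errors ignore.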