Results 1–10 of 820,716
Cross-Validation Estimates IMSE
, 1994
"... Integrated Mean Squared Error (IMSE) is a version of the usual mean squared error criterion, averaged over all possible training sets of a given size. If it could be observed, it could be used to determine optimal network complexity or optimal data subsets for efficient training. We show that two co ..."
Abstract
common methods of cross-validating average squared error deliver unbiased estimates of IMSE, converging to IMSE with probability one. These estimates thus make possible approximate IMSE-based choice of network complexity. We also show that two variants of the cross-validation measure provide unbiased IMSE
A Bound on the Cross-Validation Estimate for Algorithm Assessment
"... Cross-validation methods are commonly used as an effective way to estimate from a finite data set the generalization properties of a function approximator. It is common belief that cross-validation, at the cost of an increased computational expense, returns an estimate of the real generalization error t ..."
Abstract
Analysis of variance of cross-validation estimators of the generalization error
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2005
"... This paper brings together methods from two different disciplines: statistics and machine learning. We address the problem of estimating the variance of cross-validation (CV) estimators of the generalization error. In particular, we approach the problem of variance estimation of the CV estimators of ..."
Abstract

Cited by 15 (0 self)
Accuracy of Population Validity and Cross-Validity Estimation: An Empirical Comparison of Formula-Based, Traditional Empirical, and Equal Weights Procedures
"... An empirical Monte Carlo study was performed using predictor and criterion data from 84,808 U.S. Air Force enlistees. 501 samples were drawn for each of seven sample size conditions: 25, 40, 60, 80, 100, 150, and 200. Using an eight-predictor model, 500 estimates for each of 9 validity and 11 cross-validity ..."
Abstract
Cross-validation estimation for frequency-dependent I/Q imbalance in MIMO-OFDM receivers
 J. Signal Process. Syst
, 2010
"... imbalance and filter mismatch, are extremely important for OFDM wireless access. This work presents a low-complexity estimation of I/Q imbalances with filter mismatches to improve performance in MIMO-OFDM receivers. For N×N MIMO-OFDM systems, the proposed cross-validation estimation is such tha ..."
Abstract

Cited by 2 (1 self)
A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection
 INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE
, 1995
"... We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), te ..."
Abstract

Cited by 1248 (12 self)
Neural network ensembles, cross validation, and active learning
 Neural Information Processing Systems 7
, 1995
"... Learning of continuous valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity is defined as the variation of the output of ensemble members averaged over unlabeled data, so it qua ..."
Abstract

Cited by 469 (6 self)
it quantifies the disagreement among the networks. It is discussed how to use the ambiguity in combination with cross-validation to give a reliable estimate of the ensemble generalization error, and how this type of ensemble cross-validation can sometimes improve performance. It is shown how to estimate
No Unbiased Estimator of the Variance of K-Fold Cross-Validation
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2003
"... Most machine learning researchers perform quantitative experiments to estimate generalization error and compare the performance of different algorithms (in particular, their proposed algorithm). In order to be able to draw statistically convincing conclusions, it is important for them to also est ..."
Abstract

Cited by 60 (1 self)
estimate the uncertainty around the error (or error difference) estimate. This paper studies the very commonly used K-fold cross-validation estimator of generalization performance. The main theorem shows that there exists no universal (valid under all distributions) unbiased estimator of the variance
Algorithmic Stability and Sanity-Check Bounds for Leave-One-Out Cross-Validation
 Neural Computation
, 1997
"... In this paper we prove sanity-check bounds for the error of the leave-one-out cross-validation estimate of the generalization error: that is, bounds showing that the worst-case error of this estimate is not much worse than that of the training error estimate. The name sanity-check refers to the fact ..."
Abstract

Cited by 128 (1 self)
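As a hedged illustration of the leave-one-out estimate this entry bounds, the following sketch computes the LOO-CV error with a toy mean predictor under squared loss (both the learner and the loss are illustrative assumptions, not taken from the paper):

```python
# A minimal sketch of the leave-one-out cross-validation (LOO-CV) estimate
# of generalization error: hold out each example once, train on the
# remaining n - 1, and average the held-out losses.

def loo_cv_error(data, fit, loss):
    errors = []
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]   # leave example i out
        model = fit(train)
        errors.append(loss(model(x), y))  # test on the held-out point
    return sum(errors) / len(errors)

def fit_mean(train):
    # Toy learner: always predict the mean of the training targets.
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def sq_loss(pred, true):
    return (pred - true) ** 2

data = [(0, 1.0), (1, 2.0), (2, 3.0)]
print(loo_cv_error(data, fit_mean, sq_loss))  # 1.5 for this toy data
```

Leave-one-out is the k = n extreme of k-fold cross-validation: each of the n models is trained on n − 1 examples, which is what makes the training-error comparison in the bounds above meaningful.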
Cross-Validation and Mean-Square Stability
"... Abstract: k-fold cross-validation is a popular practical method to get a good estimate of the error rate of a learning algorithm. Here, the set of examples is first partitioned into k equal-sized folds. Each fold acts as a test set for evaluating the hypothesis learned on the other k − 1 folds. The ..."
Abstract

Cited by 5 (1 self)
. The average error across the k hypotheses is used as an estimate of the error rate. Although widely used, especially with small values of k (such as 10), the technique has heretofore resisted theoretical analysis. With only sanity-check bounds known, there is not a compelling reason to use the k-fold cross-validation
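The partition / train-on-k−1-folds / test-on-the-held-out-fold / average procedure described in this abstract can be sketched as follows (the mean predictor and squared loss are illustrative assumptions, not from the paper):

```python
# A minimal sketch of k-fold cross-validation: partition the examples into
# k equal-sized folds, train on the other k - 1 folds, evaluate on the
# held-out fold, and average the k per-fold errors.

def k_fold_cv(data, k, fit, loss):
    n = len(data)
    fold = n // k  # assumes n is divisible by k, for simplicity
    fold_errors = []
    for j in range(k):
        test = data[j * fold:(j + 1) * fold]            # held-out fold
        train = data[:j * fold] + data[(j + 1) * fold:]  # other k - 1 folds
        model = fit(train)
        fold_errors.append(sum(loss(model(x), y) for x, y in test) / len(test))
    return sum(fold_errors) / k

def fit_mean(train):
    # Toy learner: predict the mean of the training targets.
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def sq_loss(pred, true):
    return (pred - true) ** 2

data = [(i, float(i)) for i in range(6)]
print(k_fold_cv(data, 3, fit_mean, sq_loss))  # 6.25 for this toy data
```

In practice the folds are usually formed from a random shuffle of the examples; the sequential split above is kept deliberately simple so the partition structure is visible.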