Results 1–10 of 9,723
Expected Error Analysis for Model Selection
International Conference on Machine Learning (ICML), 1999
Cited by 5 (1 self)
Abstract: "In order to select a good hypothesis language (or model) from a collection of possible models, one has to assess the generalization performance of the hypothesis which is returned by a learner that is bound to use some particular model. This paper deals with a new and very efficient way of assessing this generalization performance. We present a new analysis which characterizes the expected generalization error of the hypothesis with least training error in terms of the distribution of error rates of the hypotheses in the model. This distribution can be estimated very efficiently from the data ..."
EXPECTATION ERRORS, UNCERTAINTY AND ECONOMIC ACTIVITY
2011
Abstract: "The views expressed in this working paper are those of the author(s) and do not necessarily represent the official views of the ..."
ARGENTINE TERMS OF TRADE VOLATILITY. HANDLING STRUCTURAL BREAKS AND EXPECTATIONS ERRORS.
Abstract: "Argentine terms of trade volatility. Handling structural breaks and expectations errors. ..."
Estimating the Expected Error of Empirical Minimizers for Model Selection
In Proceedings of the Fifteenth National Conference on Artificial Intelligence, 1998
Cited by 3 (1 self)
Abstract: "Model selection [e.g., 1] is considered the problem of choosing a hypothesis language which provides an optimal balance between low empirical error and high structural complexity. In this Abstract, we discuss the intuition of a new, very efficient approach to model selection. Our approach is inherently Bayesian [e.g., 2], but instead of using priors on target functions or hypotheses, we talk about priors on error values, which leads us to a new mathematical characterization of the expected true error. In the setting of classification learning, a learner is given a sample, drawn according ..."
Quantal Response Equilibria For Normal Form Games
Games and Economic Behavior, 1995
Cited by 647 (28 self)
Abstract: "We investigate the use of standard statistical models for quantal choice in a game theoretic setting. Players choose strategies based on relative expected utility, and assume other players do so as well. We define a Quantal Response Equilibrium (QRE) as a fixed point of this process, and establish ..."
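The fixed point described in this abstract is easiest to see under the common logit specification: each player mixes according to a softmax of expected payoffs, and a QRE is a mixed-strategy profile that reproduces itself under this map. The sketch below, for 2x2 games, is illustrative only; the function name, the damped iteration, and the rationality parameter `lam` are my choices, not the paper's.

```python
import numpy as np

def logit_qre_2x2(A, B, lam=1.0, n_iter=2000, damp=0.5):
    """Damped fixed-point iteration of logit quantal responses.

    A, B: 2x2 payoff matrices for the row and column player.
    Returns (p, q): probability each player puts on their first action.
    """
    p = q = 0.5  # start from the uniform mix
    for _ in range(n_iter):
        u_row = A @ np.array([q, 1.0 - q])        # row's expected payoffs vs column mix
        u_col = np.array([p, 1.0 - p]) @ B        # column's expected payoffs vs row mix
        p_new = np.exp(lam * u_row[0]) / np.exp(lam * u_row).sum()  # softmax response
        q_new = np.exp(lam * u_col[0]) / np.exp(lam * u_col).sum()
        p = (1 - damp) * p + damp * p_new         # damping stabilizes the iteration
        q = (1 - damp) * q + damp * q_new
    return p, q
```

For example, in matching pennies the iteration stays at the uniform mix, while in a game with a strictly dominant strategy a large `lam` pushes play toward that strategy, matching the QRE intuition that choice noise shrinks as rationality grows.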
Optimal Brain Damage
1990
Cited by 510 (5 self)
Abstract: "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved sp ..."
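The core quantity behind this pruning scheme is the saliency of a weight, the second-order estimate H_ii w_i^2 / 2 of how much the loss rises if that weight is set to zero. The sketch below is not the paper's neural-network procedure; it illustrates the same saliency ranking on linear least squares, where the Hessian diagonal is exact (for L(w) = ||Xw - y||^2, H = 2 XᵀX). The function name and prune count are mine.

```python
import numpy as np

def obd_prune(X, y, n_prune=1):
    """OBD-style pruning for linear least squares L(w) = ||Xw - y||^2.

    Saliency of weight i is H_ii * w_i^2 / 2, the second-order estimate
    of the loss increase from zeroing w_i at a minimum of the loss.
    """
    w = np.linalg.lstsq(X, y, rcond=None)[0]   # train to a minimum first
    h_diag = 2.0 * (X ** 2).sum(axis=0)        # exact Hessian diagonal of this loss
    saliency = 0.5 * h_diag * w ** 2           # predicted damage of removing each weight
    w_pruned = w.copy()
    w_pruned[np.argsort(saliency)[:n_prune]] = 0.0  # delete the least-salient weights
    return w_pruned, saliency
```

Ranking by saliency rather than raw magnitude is the point of the method: a small weight on a high-curvature direction can matter more than a large weight on a flat one.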
Algorithms for Nonnegative Matrix Factorization
In NIPS, 2001
Cited by 1246 (5 self)
Abstract: "Nonnegative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm ..."
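For the least-squares objective, the multiplicative update rules mentioned in the abstract are H ← H ⊙ (WᵀV) / (WᵀWH) and W ← W ⊙ (VHᵀ) / (WHHᵀ), which keep the factors nonnegative and decrease ||V − WH||² monotonically. A minimal sketch, with the function name, iteration count, and epsilon guard as my own choices:

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=1000, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2.

    V: nonnegative (n, m) matrix; returns nonnegative factors W (n, rank), H (rank, m).
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

Because each step multiplies by a nonnegative factor, no projection is needed to stay in the feasible set; the small `eps` only guards against division by zero.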
Thresholding of statistical maps in functional neuroimaging using the false discovery rate.
NeuroImage, 2002
Cited by 521 (9 self)
Abstract: "Finding objective and effective thresholds for voxel-wise statistics derived from neuroimaging data has been a long-standing problem. With at least one test performed for every voxel in an image, some correction of the thresholds is needed to control the error rates, but standard procedures for mult ..."
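False-discovery-rate control of the kind this paper applies to voxel-wise maps is typically implemented with the Benjamini-Hochberg step-up procedure: sort the m p-values and reject every hypothesis whose sorted p-value p_(i) satisfies p_(i) ≤ (i/m)·q. A short sketch (the helper name is mine, not from the paper):

```python
import numpy as np

def fdr_threshold(pvals, q=0.05):
    """Benjamini-Hochberg step-up threshold at FDR level q.

    Returns the largest sorted p-value p_(i) with p_(i) <= (i/m) * q,
    or 0.0 when nothing can be rejected. Reject all p-values <= threshold.
    """
    p = np.sort(np.asarray(pvals, dtype=float))
    m = len(p)
    below = p <= q * np.arange(1, m + 1) / m   # step-up comparison line
    return float(p[below][-1]) if below.any() else 0.0
```

Unlike a Bonferroni bound, the threshold adapts to the data: the more small p-values there are, the less severe the correction becomes.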
Estimating the number of clusters in a dataset via the Gap statistic
2000
Cited by 502 (1 self)
Abstract: "We propose a method (the "Gap statistic") for estimating the number of clusters (groups) in a set of data. The technique uses the output of any clustering algorithm (e.g. k-means or hierarchical), comparing the change in within-cluster dispersion to that expected under an appropriate reference ..."
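The comparison described here can be written as Gap(k) = E*[log W_k] − log W_k, where W_k is the within-cluster dispersion and the expectation is taken over datasets drawn from a reference distribution. A simplified sketch using a minimal k-means; the uniform-bounding-box reference, the tiny Lloyd's loop, and the iteration counts are illustrative choices, not the paper's full procedure:

```python
import numpy as np

def within_dispersion(X, labels, k):
    """Sum over clusters of squared distances to the cluster centroid."""
    return sum(((X[labels == j] - X[labels == j].mean(axis=0)) ** 2).sum()
               for j in range(k) if np.any(labels == j))

def kmeans(X, k, n_iter=50, rng=None):
    """Minimal Lloyd's k-means (illustrative, not robust to empty clusters)."""
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def gap_statistic(X, k, n_ref=10, rng=None):
    """Gap(k): mean reference log-dispersion minus observed log-dispersion."""
    rng = rng or np.random.default_rng(0)
    log_wk = np.log(within_dispersion(X, kmeans(X, k, rng=rng), k))
    lo, hi = X.min(axis=0), X.max(axis=0)          # uniform-box reference
    ref = [np.log(within_dispersion(R, kmeans(R, k, rng=rng), k))
           for R in (rng.uniform(lo, hi, X.shape) for _ in range(n_ref))]
    return np.mean(ref) - log_wk
```

The estimated number of clusters is then the k maximizing the gap (the paper adds a standard-error correction omitted here); on clearly separated clusters, Gap(k) peaks near the true k.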