Results 1–10 of 61
Active Learning with Statistical Models
, 1995
Abstract

Cited by 679 (12 self)
For many types of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992; Cohn, 1994]. We then show how the same principles may be used to select data for two alternative, statistically based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate.
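The selection principle this abstract describes can be illustrated with a minimal sketch: for locally weighted regression, choose the query that most reduces predictive variance over the region we care about. The inverse-kernel-mass variance proxy and all names below are illustrative assumptions, not the paper's actual estimator.

```python
import numpy as np

def kernel_weights(X, x0, tau=0.4):
    # Gaussian kernel weights of training inputs X around a point x0
    return np.exp(-((X - x0) ** 2) / (2 * tau ** 2))

def select_query(X, candidates, refs, tau=0.4):
    """Pick the candidate query whose addition most reduces a crude
    variance proxy: 1 / (total kernel mass) at each reference point."""
    def avg_variance(Xtrain):
        return np.mean([1.0 / (kernel_weights(Xtrain, r, tau).sum() + 1e-12)
                        for r in refs])
    scores = [avg_variance(np.append(X, c)) for c in candidates]
    return candidates[int(np.argmin(scores))]

X = np.array([0.0, 0.1, 0.2])          # data clustered near 0
candidates = np.array([0.05, 1.0])     # a redundant point vs. a new region
refs = np.linspace(0.0, 1.0, 11)       # where we want accurate predictions
print(select_query(X, candidates, refs))  # the point in the unexplored region wins
```

The closed-form criteria in the paper are exact for its two architectures; this density-based proxy only conveys the flavor of variance-minimizing data selection.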
Neural network exploration using optimal experiment design
 Neural Networks
, 1994
Abstract

Cited by 163 (2 self)
We consider the question "How should one act when the only goal is to learn as much as possible?" Building on the theoretical results of Fedorov [1972] and MacKay [1992], we apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. We demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. We conclude that, while not a panacea, OED-based query/action selection has much to offer, especially in domains where its high computational costs can be tolerated.
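For a model that is linear in its parameters, the OED machinery referenced here reduces to a simple sequential rule: query where predictive variance under the current information matrix is largest. A hedged sketch (the feature map and ridge term are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def next_query(X, candidates, ridge=1e-6):
    """Sequential optimal-design selection for a linear model y = w @ phi(x):
    pick the candidate with maximal predictive variance phi' M^{-1} phi."""
    Phi = np.vstack([np.ones(len(X)), X]).T      # features [1, x]
    M = Phi.T @ Phi + ridge * np.eye(2)          # information matrix
    Minv = np.linalg.inv(M)
    def var(x):
        phi = np.array([1.0, x])
        return phi @ Minv @ phi
    return max(candidates, key=var)

X = np.array([0.0, 0.1, 0.2])
print(next_query(X, [0.05, 0.5, 1.0]))  # farthest point is most informative
```

For a neural network the information matrix must be approximated around the current weights, which is where the expense discussed in the abstract comes from.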
Assessing the quality of learned local models
 Advances in Neural Information Processing Systems 6
, 1994
Abstract

Cited by 47 (16 self)
An approach is presented to learning high-dimensional functions in the case where the learning algorithm can affect the generation of new data. A local modeling algorithm, locally weighted regression, is used to represent the learned function. Architectural parameters of the approach, such as distance metrics, are also localized and become a function of the query point instead of being global. Statistical tests are given for when a local model is good enough and sampling should be moved to a new area. Our methods explicitly deal with the case where prediction accuracy requirements exist during exploration: by gradually shifting a "center of exploration" and controlling the speed of the shift with local prediction accuracy, a goal-directed exploration of state space takes place along the fringes of the current data support until the task goal is achieved. We illustrate this approach with simulation results and results from a real robot learning a complex juggling task.
Explaining the Favorite-Longshot Bias: Is it Risk-Love or Misperceptions?
, 2007
Abstract

Cited by 42 (6 self)
The favorite-longshot bias presents a challenge for theories of decision making under uncertainty. This longstanding empirical regularity is that betting odds provide biased estimates of the probability of a horse winning: longshots are overbet, while favorites are underbet. Neoclassical explanations focus on rational gamblers who overbet longshots due to risk-love. The competing behavioral explanations emphasize the role of misperceptions of probabilities. We provide novel empirical tests that can discriminate between these competing theories by focusing on the pricing of compound bets. We test whether the models that explain gamblers' choices in one part of their choice set (betting to win) can also rationalize decisions over a wider choice set, including compound bets in the exacta, quinella or trifecta pools. Using a new, large-scale dataset ideally suited to implement these tests, we find evidence in favor of the view that misperceptions of probability drive the favorite-longshot bias, as suggested by Prospect Theory. Along the way we provide more robust evidence on the favorite-longshot bias, falsifying the conventional wisdom that the bias is large enough to yield profit opportunities (it isn't) and that it becomes more severe in the last race (it doesn't).
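What "overbet" means here can be made concrete: betting odds imply win probabilities, and a longshot is overbet when its implied probability exceeds its true win rate. A minimal sketch with hypothetical decimal odds (not the paper's data):

```python
def implied_probabilities(decimal_odds):
    """Convert decimal odds to win probabilities, normalizing away
    the track take so the probabilities sum to 1."""
    raw = [1.0 / o for o in decimal_odds]
    total = sum(raw)              # exceeds 1 when the track takes a cut
    return [p / total for p in raw]

# Hypothetical three-horse race: a favorite, a mid-price horse, a longshot.
probs = implied_probabilities([1.5, 3.5, 12.0])
print([round(p, 3) for p in probs])   # favorite down to longshot; sums to 1
```

The bias is the systematic gap between these market-implied probabilities and observed win frequencies, which the paper measures across simple and compound bets.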
Capital Requirements, Business Loans and Business Cycles: An Empirical Analysis of the Standardized Approach in the New Basel Capital Accord
 Board of Governors of the Federal Reserve System Working Paper, November 13
, 2001
Abstract

Cited by 22 (0 self)
In the current regulatory framework, capital requirements are based on risk-weighted assets, but all business loans carry a uniform risk weight, irrespective of variations in credit risk. The proposed new Capital Accord of the Bank for International Settlements provides for a greater sensitivity of capital requirements to credit risk, raising the question of whether, and to what extent, the new capital standards will intensify business cycles. In this paper, we evaluate the potential cyclical effects of the "standardized approach" to risk evaluation in the new Accord, which involves the ratings of external agencies. We combine Moody's data on changes in U.S. borrowers' credit ratings since 1970 with estimates of the risk profile of business loans at commercial banks from the Survey of Terms of Business Lending, and also a risk profile estimated by Treacy and Carey (1998). We find that the level of required capital against business loans would be noticeably lower under the new Accord compared with the current regime. We do not find evidence of any substantial additional cyclicality in required capital levels under the standardized approach of the new Accord relative to the current regime.
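The mechanics of the standardized approach can be sketched: each loan's exposure is multiplied by a rating-dependent risk weight, and required capital is a fixed ratio of the risk-weighted total. The weight table and portfolio below are simplified illustrations patterned on the Accord's corporate schedule, not its exact values.

```python
# Illustrative rating-dependent risk weights (simplified; the actual
# Accord table and rating buckets differ in detail).
RISK_WEIGHTS = {"AAA": 0.20, "AA": 0.20, "A": 0.50,
                "BBB": 1.00, "BB": 1.00, "B": 1.50, "unrated": 1.00}

def required_capital(loans, ratio=0.08):
    """Minimum capital = ratio * sum of risk-weighted exposures."""
    return ratio * sum(amount * RISK_WEIGHTS[rating] for amount, rating in loans)

portfolio = [(100.0, "AAA"), (100.0, "BBB"), (100.0, "B")]
print(required_capital(portfolio))            # rating-sensitive requirement
print(0.08 * sum(a for a, _ in portfolio))    # current regime: uniform 100% weight
```

Cyclicality enters because the ratings, and hence the weights, migrate over the business cycle; the paper measures how much this migration moves the required total.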
Minimizing Statistical Bias with Queries
, 1995
Abstract

Cited by 18 (0 self)
I describe an exploration criterion that attempts to minimize the error of a learner by minimizing its estimated squared bias. I describe experiments with locally weighted regression on two simple kinematics problems, and observe that this "bias-only" approach outperforms the more common "variance-only" exploration approach, even in the presence of noise.
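One crude way to estimate a smoother's squared bias, offered purely as an illustration (it is not the paper's estimator), is to compare fits at two bandwidths: where a heavily smoothed fit disagrees most with a lightly smoothed one, bias is likely largest, and that is where this sketch queries.

```python
import numpy as np

def lwr_fit(X, y, x0, tau):
    """Locally weighted linear regression estimate at x0."""
    w = np.exp(-((X - x0) ** 2) / (2 * tau ** 2))
    A = np.vstack([np.ones_like(X), X]).T
    AtW = A.T * w                      # weight each column (data point)
    beta = np.linalg.solve(AtW @ A, AtW @ y)
    return beta[0] + beta[1] * x0

def query_by_bias(X, y, candidates):
    """Query where the squared disagreement between a wide and a narrow
    bandwidth fit (a rough squared-bias proxy) is largest."""
    bias2 = [(lwr_fit(X, y, c, 1.0) - lwr_fit(X, y, c, 0.3)) ** 2
             for c in candidates]
    return candidates[int(np.argmax(bias2))]

X = np.linspace(0, 3, 25)
y = np.sin(2 * X)                      # curvature makes wide-bandwidth fits biased
print(query_by_bias(X, y, [0.5, 1.5, 2.5]))
```

On exactly linear data the local linear fit is unbiased at any bandwidth, so the proxy is zero there; it only flags regions of genuine curvature, which is the behavior a bias-only criterion needs.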
A new inferential test for path models based on directed acyclic graphs
 Structural Equation Modeling
Abstract

Cited by 17 (2 self)
This article introduces a new inferential test for acyclic structural equation models (SEM) without latent variables or correlated errors. The test is based on the independence relations predicted by the directed acyclic graph of the SEMs, as given by the concept of d-separation. A wide range of distributional assumptions and structural functions can be accommodated. No iterative fitting procedures are used, precluding problems involving convergence. Exact probability estimates can be obtained, thus permitting the testing of models with small data sets. Structural equations represent the translation of a hypothesized series of cause–effect relationships between variables into a composite statistical hypothesis concerning patterns of statistical dependencies. The development of an inferential test for such a composite statistical hypothesis (see Bollen, 1989, for a historical summary) has had a large impact on fields of study in which multivariate causal hypotheses cannot be tested through randomized experiments. The various statistical innovations that were spawned by this method have mostly followed the same basic logic. A series of hypothesized causal relationships between the variables are combined to form a directed graph (the path model). This directed graph implies a series of path coefficients, some of which are fixed to some a priori value (usually zero) and the rest of which are free to vary. These free parameters are estimated by minimizing some discrepancy measure such as the maximum likelihood loss function. The predicted variance–covariance matrix, implied by the set of fully parameterized structural equations, is then compared to the sample variance–covariance matrix using a fit statistic that has a known, usually asymptotic, probability distribution.
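The combination step at the heart of such a d-separation test can be sketched: gather one p-value per independence claim implied by the DAG, combine them as C = -2 Σ ln p_i, and compare C to a chi-square distribution with 2k degrees of freedom (k claims). The p-values below are hypothetical placeholders for actual conditional-independence tests.

```python
import math

def fisher_C(p_values):
    """Combine independence-test p-values: C = -2 * sum(ln p_i),
    chi-square distributed with 2k d.f. when all k claims hold."""
    return -2.0 * sum(math.log(p) for p in p_values)

def chi2_sf_even(c, df):
    """Survival function of the chi-square distribution for even df,
    using its closed form: exp(-c/2) * sum_{i<df/2} (c/2)^i / i!."""
    k = df // 2
    half = c / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

# Hypothetical p-values for the independencies implied by a small DAG.
ps = [0.40, 0.62, 0.21, 0.55]
C = fisher_C(ps)
p_model = chi2_sf_even(C, 2 * len(ps))
print(round(C, 3), round(p_model, 3))   # large p_model: model not rejected
```

Because df = 2k is always even here, the closed-form survival function avoids any dependence on a statistics library.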
A Constructive Learning Algorithm for Local Model Networks
 in Proceedings of the IEEE Workshop on Computer-Intensive Methods in Control and Signal Processing
, 1995
Abstract

Cited by 9 (3 self)
Local Model Networks are flexible architectures for the representation of complex nonlinear dynamic systems. The local nature of the representation leads to a modular network which can integrate a variety of paradigms (neural nets, statistics, fuzzy systems and a priori mathematical models), but because of the power of the local models, the architecture is less sensitive to the curse of dimensionality than other local representations, such as Radial Basis Function networks. The concept of 'locality' is a difficult one to define, and tends to vary over a problem's input space, so a constructive structure identification algorithm is presented which automatically defines a suitable model structure on the basis of the observed data from the process being identified. Local learning algorithms are introduced for the local model parameter optimisation, which save computational effort and produce more interpretable and robust models.
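Once the structure is fixed, evaluating a local model network is simple: each local linear model is blended by a normalized Gaussian validity function. A minimal 1-D sketch with two hand-picked local models (all parameters here are illustrative, not from the paper):

```python
import numpy as np

def local_model_network(x, centers, widths, coefs):
    """y(x) = sum_i rho_i(x) * (a_i * x + b_i), where the validity
    functions rho_i are normalized Gaussians over the operating space."""
    g = np.exp(-((x - centers) ** 2) / (2 * widths ** 2))
    rho = g / g.sum()                      # normalized validity functions
    local = coefs[:, 0] * x + coefs[:, 1]  # each row: (slope a_i, offset b_i)
    return float(rho @ local)

centers = np.array([0.0, 1.0])
widths = np.array([0.4, 0.4])
coefs = np.array([[1.0, 0.0],    # near 0: y ~ x
                  [-1.0, 2.0]])  # near 1: y ~ 2 - x
print(local_model_network(0.5, centers, widths, coefs))  # blends the two fits
```

The constructive algorithm in the paper chooses the centers, widths, and number of such local models from data; this sketch only shows the forward evaluation they all share.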
From pixels to physics: Probabilistic color derendering
 Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
, 2012
Abstract

Cited by 9 (2 self)
Consumer digital cameras use tone-mapping to produce compact, narrow-gamut images that are nonetheless visually pleasing. In doing so, they discard or distort substantial radiometric signal that could otherwise be used for computer vision. Existing methods attempt to undo these effects through deterministic maps that derender the reported narrow-gamut colors back to their original wide-gamut sensor measurements. Deterministic approaches are unreliable, however, because the reverse narrow-to-wide mapping is one-to-many and has inherent uncertainty. Our solution is to use probabilistic maps, providing uncertainty estimates useful to many applications. We use a nonparametric Bayesian regression technique, local Gaussian process regression, to learn for each pixel's narrow-gamut color a probability distribution over the scene colors that could have created it. Using a variety of consumer cameras we show that these distributions, once learned from training data, are effective in simple probabilistic adaptations of two popular applications: multi-exposure imaging and photometric stereo. Our results on these applications are better than those of corresponding deterministic approaches, especially for saturated and out-of-gamut colors.
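The regression technique named here, Gaussian process regression, returns a distribution rather than a point estimate, which is what makes probabilistic derendering possible. A generic 1-D sketch (kernel, hyperparameters, and data are assumptions, unrelated to the paper's camera data):

```python
import numpy as np

def gp_posterior(Xtrain, ytrain, xstar, ell=0.3, sigma_n=0.05):
    """GP regression with an RBF kernel: returns posterior mean and
    variance at the query points, i.e. a distribution per query."""
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))
    K = k(Xtrain, Xtrain) + sigma_n ** 2 * np.eye(len(Xtrain))
    ks = k(Xtrain, xstar)
    alpha = np.linalg.solve(K, ytrain)
    mean = ks.T @ alpha
    var = 1.0 - np.sum(ks * np.linalg.solve(K, ks), axis=0)
    return mean, var

X = np.array([0.1, 0.2, 0.3, 0.8])
y = np.sin(X)
m, v = gp_posterior(X, y, np.array([0.2, 0.55]))
print(v[0] < v[1])   # less data near 0.55, so more posterior uncertainty there
```

The per-query variance is exactly the uncertainty estimate that the paper's probabilistic applications (multi-exposure imaging, photometric stereo) consume downstream.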
An Introduction to PROC LOESS for Local Regression
 Proceedings of the 24th SAS® Users Group International Conference, Paper
, 1999