Results 1–10 of 14
Bayesian Network Structure Learning using Factorized NML Universal Models
, 2008
"... Universal codes/models can be used for data compression and model selection by the minimum description length (MDL) principle. For many interesting model classes, such as Bayesian networks, the minimax regret optimal normalized maximum likelihood (NML) universal model is computationally very deman ..."
Abstract

Cited by 7 (4 self)
Universal codes/models can be used for data compression and model selection by the minimum description length (MDL) principle. For many interesting model classes, such as Bayesian networks, the minimax regret optimal normalized maximum likelihood (NML) universal model is computationally very demanding. We suggest a computationally feasible alternative to NML for Bayesian networks, the factorized NML universal model, where the normalization is done locally for each variable. This can be seen as an approximate sum-product algorithm. We show that this new universal model performs extremely well in model selection, compared to the existing state-of-the-art, even for small sample sizes.
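To make the local-normalization idea concrete, here is a toy sketch (not the authors' implementation) for binary variables: the exact NML normalizer for a Bernoulli model is computed by brute force, and an fNML-style score normalizes separately within each parent configuration. The function names and the `counts_per_parent_config` representation are illustrative assumptions.

```python
import math

def bernoulli_nml_normalizer(n):
    # Exact NML normalizer for a Bernoulli model with n observations:
    # C(n) = sum_{k=0}^{n} C(n,k) (k/n)^k ((n-k)/n)^(n-k)
    # (with the convention 0^0 = 1, which Python's ** already follows)
    if n == 0:
        return 1.0
    return sum(
        math.comb(n, k) * (k / n) ** k * ((n - k) / n) ** (n - k)
        for k in range(n + 1)
    )

def fnml_local_score(counts_per_parent_config):
    # fNML-style local score for one binary variable: the NML
    # normalization is applied separately within each parent
    # configuration instead of once over the whole network.
    score = 0.0
    for k, m in counts_per_parent_config:  # (ones, zeros) per config
        n = k + m
        if n == 0:
            continue
        loglik = (k * math.log(k / n) if k else 0.0) + \
                 (m * math.log(m / n) if m else 0.0)
        score += loglik - math.log(bernoulli_nml_normalizer(n))
    return score
```

For a full network score, such a local term would be summed over all variables; the brute-force normalizer is only feasible here because the variable is binary and n is small.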
Logical-Rule Models of Classification Response Times: A Synthesis of Mental-Architecture, Random-Walk, and Decision-Bound Approaches
"... We formalize and provide tests of a set of logicalrule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mentalarchitecture, randomwalk, and decisionbound approaches. According to the models, people make indepe ..."
Abstract

Cited by 6 (5 self)
We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli along a set of component dimensions. Those independent decisions are then combined via logical rules to determine the overall categorization response. The time course of the independent decisions is modeled via random-walk processes operating along individual dimensions. Alternative mental architectures are used as mechanisms for combining the independent decisions to implement the logical rules. We derive fundamental qualitative contrasts for distinguishing among the predictions of the rule models and major alternative models of classification RT. We also use the models to predict detailed RT distribution data associated with individual stimuli in tasks of speeded perceptual classification.
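A minimal sketch of the basic ingredients (drift values, threshold, and the exhaustive-parallel choice are illustrative assumptions, not the paper's fitted model): one random walk per stimulus dimension, combined by a logical AND under an exhaustive parallel architecture.

```python
import random

def random_walk_decision(drift, threshold, rng):
    # Random walk along one stimulus dimension: accumulate noisy
    # evidence until it crosses +threshold ("yes") or -threshold ("no").
    x, t = 0.0, 0
    while abs(x) < threshold:
        x += drift + rng.gauss(0.0, 1.0)
        t += 1
    return x > 0, t

def conjunctive_parallel_rt(drifts, threshold, rng):
    # Exhaustive parallel architecture for a conjunctive (AND) rule:
    # every dimension must finish, so RT is the slowest walk's time.
    decisions = [random_walk_decision(d, threshold, rng) for d in drifts]
    response = all(choice for choice, _ in decisions)
    rt = max(t for _, t in decisions)
    return response, rt
```

A serial architecture would instead sum the per-dimension times, and a self-terminating disjunctive (OR) rule would take the minimum; these are exactly the architectural contrasts the models are built to distinguish.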
Monte Carlo Estimation of Minimax Regret with an Application to MDL Model Selection
, 2008
"... Minimum description length (MDL) model selection, in its modern NML formulation, involves a model complexity term which is equivalent to minimax/maximin regret. When the data are discretevalued, the complexity term is a logarithm of a sum of maximized likelihoods over all possible datasets. Becaus ..."
Abstract

Cited by 2 (1 self)
Minimum description length (MDL) model selection, in its modern NML formulation, involves a model complexity term which is equivalent to minimax/maximin regret. When the data are discrete-valued, the complexity term is a logarithm of a sum of maximized likelihoods over all possible datasets. Because the sum has an exponential number of terms, its evaluation is in many cases intractable. In the continuous case, the sum is replaced by an integral for which a closed form is available in only a few cases. We present an approach based on Monte Carlo sampling, which works for all model classes, and gives strongly consistent estimators of the minimax regret. The estimates converge almost surely to the correct value as the number of iterations increases. For the important class of Markov models, one of the presented estimators is particularly efficient: in empirical experiments, accuracy that is sufficient for model selection is usually achieved already on the first iteration, even for long sequences.
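The basic Monte Carlo idea can be sketched for the simplest discrete case, a Bernoulli model (this is a naive uniform-sampling estimator, not the paper's efficient Markov-model estimator): sampling binary sequences uniformly turns the exponential sum of maximized likelihoods into an expectation that a plain average estimates consistently.

```python
import math
import random

def mc_minimax_regret_bernoulli(n, iters=200_000, seed=1):
    # Monte Carlo estimate of log C(n), where
    #   C(n) = sum over all 2^n binary sequences of the maximized
    #          likelihood (k/n)^k ((n-k)/n)^(n-k), k = number of ones.
    # Sampling sequences uniformly gives C(n) = 2^n * E[that quantity],
    # so the sample average below is a strongly consistent estimator.
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(iters):
        k = sum(rng.getrandbits(1) for _ in range(n))
        acc += (k / n) ** k * ((n - k) / n) ** (n - k)
    return n * math.log(2.0) + math.log(acc / iters)
```

For n = 2 the exact value is log 2.5, which the estimator recovers to a few decimal places; the point of the paper's construction is that the same sampling idea scales to model classes where no exact evaluation exists.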
The Wisdom of Individuals: Exploring People’s Knowledge about Everyday Events using Iterated Learning
"... Determining the knowledge that guides human judgments is fundamental to understanding how people reason, make decisions, and form predictions. We use an experimental procedure called “iterated learning, ” in which the responses that people give on one trial are used to generate the data they see on ..."
Abstract
Determining the knowledge that guides human judgments is fundamental to understanding how people reason, make decisions, and form predictions. We use an experimental procedure called “iterated learning,” in which the responses that people give on one trial are used to generate the data they see on the next, to pinpoint the knowledge that informs people’s predictions about everyday events (e.g., predicting the total box office gross of a movie from its current take). In particular, we use this method to discriminate between two models of human judgments: a simple Bayesian model (Griffiths & Tenenbaum, 2006) and a recently proposed alternative model that assumes people store only a few instances of each type of event in memory (MinK; Mozer, Pashler, & Homaei, 2008). Although testing these models using standard experimental procedures is difficult due to differences in the number of free parameters and the need to make assumptions about the knowledge of individual learners, we show that the two models make very different predictions about the outcome of iterated learning. The results of an experiment using this methodology provide a rich picture of how much people know about the ...
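The mechanics of iterated learning can be sketched with a toy chain (coin biases rather than box-office grosses; the hypothesis space and parameters are illustrative assumptions): each simulated learner sees one observation produced by the previous learner, samples a hypothesis from its posterior, and emits an observation for the next learner. The stationary distribution of sampled hypotheses converges to the prior (Griffiths & Kalish, 2007), which is why the procedure can reveal learners' knowledge.

```python
import random

def iterated_learning_chain(prior, steps, seed=0):
    # Toy iterated-learning chain: hypotheses are coin biases. Each
    # "learner" sees one flip produced by the previous learner, samples
    # a hypothesis from its posterior over biases, and generates a flip
    # for the next learner.
    rng = random.Random(seed)
    biases = sorted(prior)
    counts = {b: 0 for b in biases}
    datum = 1
    for _ in range(steps):
        posterior = [prior[b] * (b if datum else 1 - b) for b in biases]
        h = rng.choices(biases, weights=posterior)[0]
        counts[h] += 1
        datum = 1 if rng.random() < h else 0
    return counts

# With a symmetric prior, the chain visits both hypotheses about
# equally often, mirroring the prior.
counts = iterated_learning_chain({0.2: 0.5, 0.8: 0.5}, steps=20_000)
```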
Running head: Design Optimization
, 2008
"... Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in samplingb ..."
Abstract
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that use of the method has the potential to increase the informativeness of the experimental method.
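A crude sketch of the search idea for the retention case (the model forms are standard, but the parameter values and the disagreement-based utility are illustrative stand-ins for the paper's expected model-discrimination utility): randomly propose sets of test delays and keep the set where a power-law and an exponential forgetting model disagree most.

```python
import math
import random

def power_model(t, a=0.9, b=0.6):
    # Power-law forgetting curve: retention after delay t
    return a * (t + 1.0) ** (-b)

def exponential_model(t, a=0.9, b=0.2):
    # Exponential forgetting curve
    return a * math.exp(-b * t)

def search_retention_design(n_points, t_max, iters=2_000, seed=0):
    # Sampling-based design search: among random sets of test delays,
    # keep the one where the two models' predictions differ most
    # (a crude proxy for how diagnostic the design is).
    rng = random.Random(seed)
    best_design, best_score = None, -1.0
    for _ in range(iters):
        design = sorted(rng.uniform(0.0, t_max) for _ in range(n_points))
        score = sum((power_model(t) - exponential_model(t)) ** 2
                    for t in design)
        if score > best_score:
            best_design, best_score = design, score
    return best_design, best_score
```

The real method replaces random proposal with guided sampling and replaces the raw prediction gap with a utility that accounts for noise and parameter uncertainty, but the search-over-designs structure is the same.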
, 2008
"... Conventional approaches to modeling classification image data can be described in terms of a standard linear model (LM). We show how the problem can be characterized as a Generalized Linear Model (GLM) with a Bernoulli distribution. We demonstrate via simulation that this approach is more accurate i ..."
Abstract
Conventional approaches to modeling classification image data can be described in terms of a standard linear model (LM). We show how the problem can be characterized as a Generalized Linear Model (GLM) with a Bernoulli distribution. We demonstrate via simulation that this approach is more accurate in estimating the underlying template in the absence of internal noise. With increasing internal noise, however, the advantage of the GLM over the LM decreases, and the GLM is no more accurate than the LM. We then introduce the Generalized Additive Model (GAM), an extension of the GLM that can be used to estimate smooth classification images adaptively. We show that this approach is more robust to the presence of internal noise, and, finally, we demonstrate that the GAM is readily adapted to estimation of higher-order (nonlinear) classification images and to testing their significance.
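The Bernoulli-GLM formulation amounts to logistic regression of the observer's binary responses on the trial-by-trial noise fields. A minimal sketch (plain stochastic gradient ascent rather than the iteratively reweighted least squares a statistics package would use; the function name and learning-rate settings are illustrative):

```python
import math
import random

def fit_glm_classification_image(noise_fields, responses,
                                 lr=0.05, epochs=100):
    # Bernoulli GLM (logistic regression) fit by stochastic gradient
    # ascent on the log-likelihood:
    #   P(response = 1) = sigmoid(w . noise_field)
    # The fitted weight vector w is the estimated classification image.
    dim = len(noise_fields[0])
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in zip(noise_fields, responses):
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
            for i in range(dim):
                w[i] += lr * (y - p) * x[i]
    return w
```

In the LM approach the classification image is instead a difference of noise-field averages conditioned on the response; the GLM replaces that with a likelihood-based fit, which is what gives it the accuracy advantage when internal noise is low.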
Running Head: Methods of Model Evaluation and Comparison
, 2008
"... Computational models are powerful tools that can enhance understanding of scientific phenomena. The enterprise of modeling is most productive when the reasons underlying a model’s adequacy, and possibly its superiority to other models, are understood. This article begins with an overview of the main ..."
Abstract
Computational models are powerful tools that can enhance understanding of scientific phenomena. The enterprise of modeling is most productive when the reasons underlying a model’s adequacy, and possibly its superiority to other models, are understood. This article begins with an overview of the main criteria that must be considered in model evaluation and selection, in particular explaining why generalizability is the preferred criterion for model selection. This is followed by a review of measures of generalizability. In the final section, we demonstrate the use of five versatile and easy-to-use selection methods for choosing between two mathematical models of protein folding.
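Among the easy-to-use selection methods such reviews typically cover are penalized-likelihood criteria, which trade goodness of fit against complexity as a proxy for generalizability. A minimal sketch of two standard criteria (illustrative, not necessarily the article's chosen five):

```python
import math

def aic(log_likelihood, n_params):
    # Akaike information criterion: in-sample fit penalized by the
    # number of free parameters (smaller is better).
    return -2.0 * log_likelihood + 2.0 * n_params

def bic(log_likelihood, n_params, n_obs):
    # Bayesian information criterion: the penalty grows with sample
    # size, so it favors simpler models more strongly than AIC
    # whenever n_obs > e^2 (about 8 observations).
    return -2.0 * log_likelihood + n_params * math.log(n_obs)
```

Both criteria penalize only the number of parameters; NML-style complexity terms (as in the MDL entries above) additionally account for a model's functional form, which is why they can discriminate models that AIC and BIC treat as equally complex.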
Domain: Signal and Image Processing
, 2007
"... Contributions à la microscopie à fluorescence en imagerie biologique: modélisation de la PSF, restauration d’images et détection superrésolutive Soutenue le 30 novembre 2007 devant le jury composé de JeanChristophe OLIVOMARIN Directeur ..."
Abstract
Contributions to fluorescence microscopy in biological imaging: PSF modeling, image restoration, and super-resolution detection. Defended on 30 November 2007 before a jury including Jean-Christophe Olivo-Marin (thesis advisor).
CHAPTER ELEVEN Evaluation and Comparison of Computational Models
"... 2. Conceptual Overview of Model Evaluation and Comparison 288 ..."