Results 11–20 of 45
Decision making and learning while taking sequential risks
Journal of Experimental Psychology: Learning, Memory, and Cognition, 2008
Abstract
Cited by 3 (2 self)
A sequential risk-taking paradigm used to identify real-world risk takers invokes both learning and decision processes. This article expands the paradigm to a larger class of tasks with different stochastic environments and different learning requirements. Generalizing a Bayesian sequential risk-taking model to the larger set of tasks clarifies the roles of learning and decision making during sequential risky choice. Results show that respondents adapt their learning processes and associated mental representations of the task to the stochastic environment. Furthermore, their Bayesian learning processes are shown to interfere with the paradigm’s identification of risky drug use, whereas the decision-making process facilitates its diagnosticity. Theoretical implications of the results, in terms of both understanding risk-taking behavior and improving risk-taking assessment methods, are discussed.
Acquiring selectional preferences in a Thai lexical database
The 1st International Joint Conference on Natural Language Processing (IJCNLP-04), 2004
Abstract
Cited by 2 (2 self)
In this paper, we consider the problem of enriching a Thai lexical database by extending the semantic information with selectional preferences. We propose a novel approach for acquiring selectional preferences of verbs, which is motivated by the tree cut model. We apply a model selection technique called the Bayesian Information Criterion (BIC). Given a semantic hierarchy, our goal is to generalize initial noun classes to the most plausible levels on that hierarchy. We present an iterative algorithm for generalization. The algorithm performs agglomerative merging on the semantic hierarchy in a bottom-up manner. The BIC is used to measure the improvement of the model both locally and globally. In our experiments, we consider the Web as a large corpus. We also propose approaches for extracting examples from the Web. Preliminary experimental results are given to show the feasibility and effectiveness of our approach.
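The BIC used here trades likelihood against parameter count. A minimal sketch of the criterion, assuming a Gaussian toy model (the data and function names below are illustrative, not from the paper):

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: lower is better.
    k = number of free parameters, n = number of observations."""
    return -2.0 * log_likelihood + k * math.log(n)

def gaussian_loglik(xs, mu, sigma):
    """Log-likelihood of the data under a Normal(mu, sigma) model."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in xs)

# Toy comparison: one-parameter model vs two-parameter model.
xs = [1.9, 2.1, 2.0, 1.8, 2.2]
mu = sum(xs) / len(xs)
# Model A: estimate mu only, sigma fixed at 1 (k = 1).
bic_a = bic(gaussian_loglik(xs, mu, 1.0), k=1, n=len(xs))
# Model B: estimate both mu and sigma (k = 2).
sigma = (sum((x - mu)**2 for x in xs) / len(xs)) ** 0.5
bic_b = bic(gaussian_loglik(xs, mu, sigma), k=2, n=len(xs))
# The model with the smaller BIC is preferred.
```

On this tightly clustered toy data the fitted sigma is small, so the extra parameter pays for its penalty and Model B wins; with diffuse data the fixed-sigma model can win instead.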
ELeaRNT: Evolutionary Learning of Rich Neural Network Topologies, 2002
Abstract
Cited by 2 (0 self)
In this paper we focus on the problem of using a genetic algorithm for model selection within a Bayesian framework. We propose to reduce the model selection problem to a search problem, solved using evolutionary computation to explore a posterior distribution over the model space. As a case study, we introduce ELeaRNT (Evolutionary Learning of Rich Neural Network Topologies), a genetic algorithm which evolves a particular class of models, namely Rich Neural Networks (RNN), in order to find an optimal domain-specific nonlinear function approximator with good generalization capability. In order to evolve this kind of neural network, ELeaRNT uses a Bayesian fitness function. The experimental results show that ELeaRNT using a Bayesian fitness function finds, in a completely automated way, networks well-matched to the analysed problem, with acceptable complexity.
Schwarz, Wallace, and Rissanen: Intertwining Themes in Theories of Model Selection, 2000
Abstract
Cited by 1 (0 self)
Investigators interested in model order estimation have tended to divide themselves into widely separated camps; this survey of the contributions of Schwarz, Wallace, Rissanen, and their coworkers attempts to build bridges between the various viewpoints, illuminating connections which may have previously gone unnoticed and clarifying misconceptions which seem to have propagated in the applied literature. Our tour begins with Schwarz's approximation of Bayesian integrals via Laplace's method. We then introduce the concepts underlying Rissanen's minimum description length principle via a Bayesian scenario with a known prior; this provides the groundwork for understanding his more complex non-Bayesian MDL which employs a "universal" encoding of the integers. Rissanen's method of parameter truncation is contrasted with that employed in various versions of Wallace's minimum message length criteria.
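Schwarz's starting point mentioned above can be stated compactly (in standard notation, not the survey's): for a model $M$ with $k$ free parameters $\theta$, $n$ observations $D$, and maximum-likelihood estimate $\hat{\theta}$, Laplace's method gives

\[
\log p(D \mid M) \;=\; \log \int p(D \mid \theta, M)\, p(\theta \mid M)\, d\theta \;\approx\; \log p(D \mid \hat{\theta}, M) \;-\; \frac{k}{2}\log n \;+\; O(1),
\]

and discarding the $O(1)$ terms yields the BIC.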
A Bayesian Approach to Financial Model Calibration, Uncertainty Measures and Optimal Hedging
Abstract
Cited by 1 (1 self)
Michaelmas 2009. This thesis is dedicated to the late …
Counterexamples to a Likelihood Theory of Evidence (final corrections, August 30, 2006; forthcoming in Minds and Machines)
Abstract
Cited by 1 (1 self)
The Likelihood Theory of Evidence (LTE) says, roughly, that all the information relevant to the bearing of data on hypotheses (or models) is contained in the likelihoods. There exist counterexamples in which one can tell which of two hypotheses is true from the full data, but not from the likelihoods alone. These examples suggest that some forms of scientific reasoning, such as the consilience of inductions (Whewell, 1858), cannot be represented within Bayesian and Likelihoodist philosophies of science. Key words: the likelihood principle, the law of likelihood, evidence, Bayesianism, Likelihoodism, curve fitting, regression, asymmetry of cause and effect.
Predictive Accuracy as an Achievable Goal of Science
Abstract
Cited by 1 (0 self)
What has science actually achieved? A theory of achievement should (1) define what has been achieved, (2) describe the means or methods used in science, and (3) explain how such methods lead to such achievements. Predictive accuracy is one truth-related achievement of science, and there is an explanation of why common scientific practices (of trading off simplicity and fit) tend to increase predictive accuracy. Akaike’s explanation for the success of AIC is limited to interpolative predictive accuracy. But therein lies the strength of the general framework, for it also provides a clear formulation of many open problems of research.
Penalized quadratic inference functions for variable selection in longitudinal research, 2006
Abstract
Cited by 1 (0 self)
For decades, much research has been devoted to developing and comparing variable selection methods, but primarily for the classical case of independent observations. Existing variable-selection methods can be adapted to cluster-correlated observations, but some adaptation is required. For example, classical model fit statistics such as AIC and BIC are undefined if the likelihood function is unknown (Pan, 2001). Little research has been done on variable selection for generalized estimating equations (GEE, Liang and Zeger, 1986) and similar correlated-data approaches. This thesis will review existing work on model selection for GEE and propose new model selection options for GEE, as well as for a more sophisticated marginal modeling approach based on quadratic inference functions (QIF, Qu, Lindsay, and Li, 2000), which has better asymptotic properties than classic GEE. The focus is on selection using continuous penalties such as LASSO (Tibshirani, 1996) or SCAD (Fan and Li, 2001) rather than the older discrete penalties such as AIC and BIC.
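The continuous penalties mentioned here shrink some coefficients exactly to zero, which is what makes them selection devices. A minimal sketch of LASSO via coordinate descent in the classical independent-observations case (not the thesis's GEE/QIF extension; data and names below are illustrative):

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator: the closed-form lasso coordinate update."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso by cyclic coordinate descent.
    Minimizes (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]      # partial residual excluding j
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return b

# Toy data: the response depends on the first two predictors only.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(100)
b = lasso_cd(X, y, lam=0.5)
# A sufficiently large penalty zeroes out the irrelevant coefficients,
# at the cost of shrinking the relevant ones toward zero.
```

The zeroing behavior is the contrast with AIC/BIC-style discrete selection: the penalty performs estimation and selection in one continuous optimization.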
Sparse Model Fitting in Nested Families: Bayesian Approach vs. Penalized Likelihood
Abstract
Cited by 1 (1 self)
We study the problem of model fitting in the framework of nested probabilistic families. Our criteria are: (i) sparsity of the identified representation; (ii) its ability to fit the (finite-length) data set available. As we show in this paper, current methodologies, often taking the form of penalized versions of the data likelihood, cannot simultaneously satisfy these requirements, as the examples presented clearly demonstrate. By contrast, maximization of the Bayesian model posterior, even without assuming a complexity-penalizing prior, is able to select models of appropriate complexity, enabling sound determination of their parameters in a second step.
Cue integration vs. exemplar-based reasoning in multi-attribute decisions from memory: A matter of cue representation
Judgment and Decision Making, 2010
Abstract
Cited by 1 (0 self)
Inferences about target variables can be achieved by deliberate integration of probabilistic cues or by retrieving similar cue patterns (exemplars) from memory. In tasks with cue information presented in on-screen displays, rule-based strategies tend to dominate unless the abstraction of cue-target relations is unfeasible. This dominance has also, surprisingly, been demonstrated in experiments that demanded the retrieval of cue values from memory (M. Persson & J. Rieskamp, 2009). In three modified replications involving a fictitious disease, binary cue values were represented either by alternative symptoms (e.g., fever vs. hypothermia) or by symptom presence vs. absence (e.g., fever vs. no fever). The former representation might hinder cue abstraction. The cues were predictive of the severity of the disease, and participants had to infer in each trial which of two patients was sicker. The experiments replicated the rule dominance with present-absent cues but yielded higher percentages of exemplar-based strategies with alternative cues. They demonstrate that a change in cue representation may induce a dramatic shift from rule-based to exemplar-based reasoning in formally identical tasks.