Results 11-20 of 39
A hierarchical Bayesian model of human decision-making on an optimal stopping problem
Cognitive Science, 2006
Cited by 14 (3 self)
Wiener diffusion accounts of human decision-making are among the most successful and best-developed formal models in the psychological sciences. We reconsider these models from a Bayesian perspective, using graphical modeling and Markov chain Monte Carlo methods for posterior sampling. By analyzing seminal data from a brightness discrimination task, we show how the Bayesian approach offers several avenues for extending and improving diffusion models. These possibilities include the hierarchical modeling of stimulus properties, and modeling the role of contaminant processes in generating experimental data. We also argue that the Bayesian approach challenges some basic assumptions of previous diffusion models, involving how variability in decision-making should be interpreted. We conclude that adopting a Bayesian approach to relating diffusion models and human decision-making data will sharpen the theoretical and empirical questions, and improve our understanding of a basic human cognitive ability.
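The Wiener diffusion process this abstract builds on can be sketched in a few lines: evidence accumulates with constant drift plus Gaussian noise until it hits one of two absorbing boundaries. This is a minimal illustrative simulation, not the paper's hierarchical Bayesian model; the parameter values are arbitrary.

```python
import math
import random

def simulate_diffusion(drift, boundary, dt=0.001, max_t=10.0, rng=random):
    """Simulate one Wiener diffusion trial.

    Evidence starts midway between absorbing boundaries at 0 and `boundary`
    and accumulates with the given drift plus unit-variance Gaussian noise.
    Returns (choice, rt): choice is 1 if the upper boundary was hit first.
    """
    x, t = boundary / 2.0, 0.0
    while 0.0 < x < boundary and t < max_t:
        x += drift * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= boundary else 0), t

random.seed(1)
trials = [simulate_diffusion(drift=1.5, boundary=1.0) for _ in range(200)]
upper_rate = sum(choice for choice, _ in trials) / len(trials)
```

With a positive drift the upper ("correct") boundary is hit on most trials, and response times fall out of the first-passage times, which is what makes the model a joint account of accuracy and latency.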
Population Markov Chain Monte Carlo
Machine Learning, 2003
Cited by 12 (2 self)
Stochastic search algorithms inspired by physical and biological systems are applied to the problem of learning directed graphical probability models in the presence of missing observations and hidden variables. For this class of problems, deterministic search algorithms tend to halt at local optima, requiring random restarts to obtain solutions of acceptable quality. We compare three stochastic search algorithms: a Metropolis-Hastings sampler (MHS), an evolutionary algorithm (EA), and a new hybrid algorithm called Population Markov Chain Monte Carlo, or popMCMC. PopMCMC uses statistical information from a population of MHSs to inform the proposal distributions for individual samplers in the population. Experimental results show that popMCMC and EAs learn more efficiently than the MHS with no information exchange. Populations of MCMC samplers exhibit more diversity than populations evolving according to EAs not satisfying physics-inspired local reversibility conditions. Key words: Markov chain Monte Carlo, Metropolis-Hastings algorithm, graphical probabilistic models, Bayesian networks, Bayesian learning, evolutionary algorithms.
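The Metropolis-Hastings sampler that popMCMC runs in parallel is the standard random-walk variant: propose a local move, accept with probability min(1, target ratio). A minimal single-chain sketch on a toy one-dimensional target (not the paper's structure-learning setting):

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=0.5, rng=random):
    """Random-walk Metropolis-Hastings with symmetric Gaussian proposals."""
    samples, x, log_p = [], x0, log_target(x0)
    for _ in range(n_samples):
        x_prop = x + rng.gauss(0.0, step)
        log_p_prop = log_target(x_prop)
        # accept with probability min(1, p(x') / p(x))
        if log_p_prop >= log_p or rng.random() < math.exp(log_p_prop - log_p):
            x, log_p = x_prop, log_p_prop
        samples.append(x)
    return samples

random.seed(0)
# Toy target: standard normal density, known only up to a constant.
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=5.0, n_samples=5000)
posterior_mean = sum(draws[1000:]) / 4000  # discard burn-in
```

popMCMC's contribution, per the abstract, is to let a population of such chains share statistics when building proposal distributions, rather than each chain proposing blindly as above.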
Learning hybrid Bayesian networks from data
1998
Cited by 11 (1 self)
We illustrate two different methodologies for learning hybrid Bayesian networks, that is, Bayesian networks containing both continuous and discrete variables, from data. The two methodologies differ in how they handle continuous data when learning the Bayesian network structure. The first methodology uses discretized data to learn the Bayesian network structure, and the original non-discretized data for the parameterization of the learned structure. The second methodology uses non-discretized data both to learn the Bayesian network structure and its parameterization. For the direct handling of continuous data, we propose the use of artificial neural networks as probability estimators, to be used as an integral part of the scoring metric defined to search the space of Bayesian network structures. With both methodologies, we assume the availability of a complete dataset, with no missing values or hidden variables. We report experimental results aimed at comparing the two methodologies. These results provide evidence that learning with discretized data presents advantages, both in efficiency and in the accuracy of the learned models, over the alternative approach of using non-discretized data.
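The first methodology's preprocessing step, mapping continuous values to discrete bins before structure search, can be illustrated with a simple equal-width discretizer. The abstract does not specify which discretization scheme was used; this is just one common choice:

```python
def discretize_equal_width(values, n_bins):
    """Map continuous values to bin indices 0..n_bins-1 using equal-width bins."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant column
    # clamp the maximum value into the last bin
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

data = [0.1, 0.4, 0.35, 0.8, 0.95, 0.55]
bins = discretize_equal_width(data, n_bins=3)
```

Structure learning would then score candidate networks against `bins`, while (in the first methodology) the final conditional distributions are estimated from the original continuous `data`.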
Bayesian Partitioning for Classification and Regression
1999
Cited by 11 (3 self)
In this paper we propose a new Bayesian approach to data modelling. The Bayesian partition model constructs arbitrarily complex regression and classification surfaces by splitting the design space into an unknown number of disjoint regions. Within each region the data are assumed to be exchangeable and to come from some simple distribution. Using conjugate priors, the marginal likelihoods of the models can be obtained analytically for any proposed partitioning of the space, where the number and location of the regions are assumed unknown a priori. Markov chain Monte Carlo simulation techniques are used to obtain distributions on partition structures, and by averaging across samples, smooth prediction surfaces are formed.
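The analytic marginal likelihood that makes partition models tractable is easy to demonstrate in the simplest conjugate case: Bernoulli labels in each region with a Beta prior (the paper's framework covers general conjugate families; this special case is mine). Regions are independent given the partition, so region log-marginals add, and a good split scores higher than pooling:

```python
import math

def log_marginal_beta_bernoulli(successes, trials, a=1.0, b=1.0):
    """Log marginal likelihood of Bernoulli data in one region under a
    Beta(a, b) prior: the success rate is integrated out analytically."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + math.lgamma(a + successes) + math.lgamma(b + trials - successes)
            - math.lgamma(a + b + trials))

def log_marginal_partition(labels, regions):
    """Sum per-region log marginals for a proposed partition (list of index
    sets); regions are conditionally independent, so the logs add."""
    total = 0.0
    for region in regions:
        ys = [labels[i] for i in region]
        total += log_marginal_beta_bernoulli(sum(ys), len(ys))
    return total

labels = [0, 0, 0, 0, 1, 1, 1, 1]
split = log_marginal_partition(labels, [[0, 1, 2, 3], [4, 5, 6, 7]])
pooled = log_marginal_partition(labels, [list(range(8))])
```

An MCMC sampler over partitions would use exactly these closed-form scores when deciding whether to accept a proposed split or merge.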
Bayes Optimal Instance-Based Learning
Machine Learning: ECML-98, Proceedings of the 10th European Conference, Volume 1398 of Lecture, 1998
Cited by 8 (2 self)
In this paper we present a probabilistic formalization of the instance-based learning approach. In our Bayesian framework, moving from the construction of an explicit hypothesis to a data-driven instance-based learning approach is equivalent to averaging over all the (possibly infinitely many) individual models. The general Bayesian instance-based learning framework described in this paper can be applied with any set of assumptions defining a parametric model family, and to any discrete prediction task where the number of simultaneously predicted attributes is small, which includes, for example, all classification tasks prevalent in the machine learning literature. To illustrate the use of the suggested general framework in practice, we show how the approach can be implemented in the special case with the strong independence assumptions underlying the so-called Naive Bayes classifier. The resulting Bayesian instance-based classifier is validated empirically with public domain data sets...
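Under Naive Bayes independence assumptions with Dirichlet priors, averaging over all parameter values collapses to a posterior-predictive rule built from smoothed counts. A minimal sketch under that assumption (binary features, which is my simplification, not the paper's general setting):

```python
from collections import Counter

def naive_bayes_predict(train, x, classes, alpha=1.0):
    """Posterior-predictive Naive Bayes: with Dirichlet(alpha) priors, the
    Bayesian average over parameters reduces to smoothed count ratios.

    `train` is a list of (feature_tuple, class_label) pairs."""
    class_counts = Counter(c for _, c in train)
    n = len(train)
    scores = {}
    for c in classes:
        # smoothed class prior
        score = (class_counts[c] + alpha) / (n + alpha * len(classes))
        rows = [f for f, lab in train if lab == c]
        for j, value in enumerate(x):
            matches = sum(1 for f in rows if f[j] == value)
            # smoothed per-feature likelihood (assumes 2 values per feature)
            score *= (matches + alpha) / (len(rows) + 2 * alpha)
        scores[c] = score
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

train = [((1, 0), "a"), ((1, 1), "a"), ((0, 0), "b"), ((0, 1), "b")]
probs = naive_bayes_predict(train, (1, 0), classes=["a", "b"])
```

The prediction is computed directly from the stored instances at query time, which is what makes this "instance-based": no explicit hypothesis is ever constructed.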
Applying General Bayesian Techniques to Improve TAN Induction
Proceedings of the International Conference on Knowledge Discovery and Data Mining, 1999
Cited by 4 (3 self)
Tree Augmented Naive Bayes (TAN) has been shown to be competitive with state-of-the-art machine learning algorithms [9]. However, the TAN induction algorithm that appears in [9] can be improved in several ways. In this paper we identify three weak points in it and introduce two ideas to overcome those problems: the multinomial sampling approach to learning Bayesian networks, and local Bayesian model averaging. These ideas are generic and can thus be reused to improve other learning algorithms. We empirically test the new algorithms and conclude that in many cases they lead to an improvement in classification accuracy and in the quality of the probabilities given as predictions.
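The model-averaging idea can be sketched generically: instead of committing to one learned structure, weight each candidate model's class-probability predictions by its (normalized) marginal likelihood. The numbers below are illustrative placeholders, not results from the paper:

```python
import math

def model_average(log_marginals, predictions):
    """Bayesian model averaging: weight each candidate model's class
    predictions by its normalized marginal likelihood."""
    m = max(log_marginals)
    weights = [math.exp(l - m) for l in log_marginals]  # stabilized softmax
    z = sum(weights)
    weights = [w / z for w in weights]
    n_classes = len(predictions[0])
    return [sum(w * p[k] for w, p in zip(weights, predictions))
            for k in range(n_classes)]

# Two hypothetical TAN structures with equal evidence: averaging mixes them.
avg = model_average([-10.0, -10.0], [[0.9, 0.1], [0.7, 0.3]])
```

Averaging over several locally plausible structures tends to produce better-calibrated probabilities than any single structure, which matches the abstract's claim about prediction quality.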
Robust action strategies to induce desired effects
IEEE Transactions on Systems, Man & Cybernetics, Part A: Systems and Humans, 2004
Cited by 4 (2 self)
A new methodology is given in this paper to obtain a near-optimal strategy (i.e., a specification of courses of action over time) that is also robust to environmental perturbations (unexpected events and/or parameter uncertainties), in order to achieve the desired effects. A dynamic Bayesian network (DBN)-based stochastic mission model is employed to represent the dynamic and uncertain nature of the environment. Genetic algorithms are applied to search for a near-optimal strategy, with the DBN serving as a fitness evaluator. The probability of achieving the desired effects (namely, the probability of success) at a specified terminal time is a random variable due to uncertainties in the environment. Consequently, we focus on the signal-to-noise ratio (SNR), a measure of the mean and variance of the probability of success, to gauge the goodness of a strategy. The resulting strategy will not only have a relatively high probability of inducing the desired effects, but will also be robust to environmental uncertainties. Keywords: effects-based operations, optimization, organizational design, robustness, signal-to-noise ratio, Taguchi method, dynamic Bayesian networks, genetic algorithms, confidence region, hypothesis testing.
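The keyword list mentions the Taguchi method, whose larger-the-better SNR is one standard way to combine mean and variance into a single robustness score; whether the paper uses exactly this variant is an assumption on my part. It penalizes a strategy whose probability of success is occasionally low, even if its mean is unchanged:

```python
import math

def taguchi_snr(success_probs):
    """Larger-the-better Taguchi signal-to-noise ratio (in dB) over
    probability-of-success samples from perturbed runs:
    SNR = -10 * log10(mean(1 / p_i^2)). Higher is better; the reciprocal
    squares punish low outliers, so high-mean, low-variance strategies win."""
    n = len(success_probs)
    return -10.0 * math.log10(sum(1.0 / p ** 2 for p in success_probs) / n)

steady = [0.80, 0.80, 0.80, 0.80]
erratic = [0.95, 0.95, 0.95, 0.35]  # similar mean, much higher variance
```

A GA maximizing `taguchi_snr` over perturbed DBN evaluations would prefer `steady` over `erratic`, which is exactly the robustness behavior the abstract describes.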
Tractable Bayesian Learning of Tree Augmented Naive Bayes Classifiers
Proceedings of the Twentieth International Conference on Machine Learning, 2003
Cited by 3 (1 self)
Bayesian classifiers such as Naive Bayes or Tree Augmented Naive Bayes (TAN) have shown excellent performance given their simplicity and heavy underlying independence assumptions. In this paper we introduce a classifier that takes the TAN models as its basis and takes into account uncertainty in model selection. To do this, we introduce decomposable distributions over TANs and show that the expression resulting from the Bayesian model averaging of TAN models can be integrated in closed form if we assume the prior probability distribution to be a decomposable distribution. This result allows for the construction of a classifier with a shorter learning time and a longer classification time than TAN. Empirical results show that the classifier is, in most cases, more accurate than TAN and better approximates the class probabilities.
ELeaRNT: Evolutionary learning of rich neural network topologies
2002
Cited by 2 (0 self)
In this paper we focus on the problem of using a genetic algorithm for model selection within a Bayesian framework. We propose to reduce the model selection problem to a search problem, solved using evolutionary computation to explore a posterior distribution over the model space. As a case study, we introduce ELeaRNT (Evolutionary Learning of Rich Neural Network Topologies), a genetic algorithm which evolves a particular class of models, namely Rich Neural Networks (RNN), in order to find an optimal domain-specific nonlinear function approximator with good generalization capability. In order to evolve this kind of neural network, ELeaRNT uses a Bayesian fitness function. The experimental results show that ELeaRNT using a Bayesian fitness function finds, in a completely automated way, networks well-matched to the analysed problem, with acceptable complexity.
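The search loop underlying such an approach is a standard genetic algorithm: selection, crossover, mutation, repeat, with the fitness function (here a trivial stand-in; in ELeaRNT it would be the Bayesian score of a network topology) steering the population. A minimal sketch on bit-string genomes:

```python
import random

def evolve(fitness, init_pop, n_gens=30, mut_rate=0.05, rng=random):
    """Minimal generational GA over fixed-length bit strings: tournament
    selection, one-point crossover, bit-flip mutation, maximizing `fitness`."""
    pop = list(init_pop)
    for _ in range(n_gens):
        new_pop = []
        while len(new_pop) < len(pop):
            # size-2 tournament selection of two parents
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, len(p1))          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < mut_rate else b for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

random.seed(3)
# Toy fitness: count of ones (a stand-in for a Bayesian fitness score).
init = [[random.randrange(2) for _ in range(12)] for _ in range(20)]
best = evolve(lambda g: sum(g), init)
```

In a topology-search setting the genome would encode a network architecture rather than raw bits, and evaluating `fitness` would dominate the runtime, which is why the choice of fitness function (Bayesian versus ad hoc) matters so much.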
A Hierarchical Bayes Approach to Variable Selection for Generalized Linear Models
2004
Cited by 2 (0 self)
For the problem of variable selection in generalized linear models, we develop various adaptive Bayesian criteria. Using a hierarchical mixture setup for model uncertainty, combined with an integrated Laplace approximation, we derive Empirical Bayes and Fully Bayes criteria that can be computed easily and quickly. The performance of these criteria is assessed via simulation and compared to other criteria such as AIC and BIC on normal, logistic and Poisson regression model classes. A Fully Bayes criterion based on a restricted region hyperprior seems to be the most promising.
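The BIC baseline this abstract compares against is easy to demonstrate for the normal regression case: fit each candidate predictor subset by least squares and score it with n*log(RSS/n) + k*log(n), preferring the minimum. This sketch handles at most one predictor so the OLS algebra stays closed-form; the data are synthetic:

```python
import math

def bic_linear(y, X_cols):
    """BIC for a linear model with the given predictor columns (plus an
    intercept), fit by ordinary least squares. Supports zero or one
    predictor so the normal equations reduce to closed-form slopes."""
    n = len(y)
    if not X_cols:
        resid = [yi - sum(y) / n for yi in y]     # intercept-only model
        k = 1
    else:
        x = X_cols[0]
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        b = sxy / sxx                             # OLS slope
        resid = [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]
        k = 2
    rss = sum(r * r for r in resid)
    return n * math.log(rss / n) + k * math.log(n)

# Synthetic data with a genuine linear signal plus small alternating noise.
x = [float(i) for i in range(10)]
y = [2.0 * xi + 1.0 + 0.1 * ((-1) ** i) for i, xi in enumerate(x)]
bic_null = bic_linear(y, [])
bic_slope = bic_linear(y, [x])
```

The adaptive Empirical Bayes and Fully Bayes criteria the paper derives play the same role as this score, selecting a subset, but with data-driven rather than fixed complexity penalties.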