Results 1–10 of 32
Locally Bayesian Learning with Applications to Retrospective Revaluation and Highlighting
Psychological Review, 2006
Cited by 27 (7 self)
Abstract:
A scheme is described for locally Bayesian parameter updating in models structured as successions of component functions. The essential idea is to backpropagate the target data to interior modules, such that an interior component’s target is the input to the next component that maximizes the probability of the next component’s target. Each layer then does locally Bayesian learning. The approach assumes online, trial-by-trial learning. The resulting parameter updating is not globally Bayesian but can better capture human behavior. The approach is implemented for an associative learning model that first maps inputs to attentionally filtered inputs and then maps attentionally filtered inputs to outputs. The Bayesian updating allows the associative model to exhibit retrospective revaluation effects such as backward blocking and unovershadowing, which have been challenging for associative learning models. The backpropagation of target values to attention allows the model to show trial-order effects, including highlighting and differences in the magnitude of forward and backward blocking, which have been challenging for Bayesian learning models.
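The local-update scheme the abstract describes can be sketched in a few lines: each component keeps a discrete posterior over its own hypotheses, the external target is back-propagated by choosing the interior value that makes that target most probable under the next component's posterior, and each component then runs ordinary Bayes' rule on its own (input, target) pair. The two-valued hypothesis space and noise level below are invented for illustration and are far simpler than the paper's attention-filtered associative model.

```python
# Toy sketch of locally Bayesian updating (hypothetical setup, not the
# paper's exact model): a single interior component feeding an output
# component, each with its own discrete Bayesian posterior.

def normalize(d):
    z = sum(d.values())
    return {h: p / z for h, p in d.items()}

def bayes_update(prior, likelihood, datum):
    # posterior(h) ∝ prior(h) * P(datum | h)
    return normalize({h: p * likelihood(h, datum) for h, p in prior.items()})

# Output component: maps an interior code c ∈ {0, 1} to an output y ∈ {0, 1}.
# Hypotheses are the four deterministic maps, softened by noise EPS.
EPS = 0.1
maps = [(0, 0), (0, 1), (1, 0), (1, 1)]   # (output for c=0, output for c=1)
post2 = normalize({m: 1.0 for m in maps})

def lik2(m, datum):
    c, y = datum
    return 1 - EPS if m[c] == y else EPS

def p_y_given_c(c, y):
    # predictive probability of output y for interior code c
    return sum(p * lik2(m, (c, y)) for m, p in post2.items())

# One trial: the externally supplied target is y_t = 1.
y_t = 1
# Back-propagated interior target: the code c that maximizes the
# probability that the output component produces y_t.
c_t = max([0, 1], key=lambda c: p_y_given_c(c, y_t))
# The output component then does its own local Bayesian update.
post2 = bayes_update(post2, lik2, (c_t, y_t))
```

With a uniform prior the two candidate codes tie, so the first one is chosen, and the update shifts posterior mass toward maps that emit the target from that code.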
A general-purpose tunable landscape generator
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2006
Cited by 5 (2 self)
Abstract:
The research literature on metaheuristic and evolutionary computation has proposed a large number of algorithms for the solution of challenging real-world optimization problems. It is often not possible to study theoretically the performance of these algorithms unless significant assumptions are made about either the algorithm itself or the problems to which it is applied, or both. As a consequence, metaheuristics are typically evaluated empirically using a set of test problems. Unfortunately, relatively little attention has been given to the development of methodologies and tools for the large-scale empirical evaluation and/or comparison of metaheuristics. In this paper, we propose a landscape (test-problem) generator that can be used to generate optimization problem instances for continuous, bound-constrained optimization problems. The landscape generator is parameterized by a small number of parameters, and the values of these parameters have a direct and intuitive interpretation in terms of the geometric features of the landscapes that they produce. An experimental space is defined over algorithms and problems, via a tuple of parameters for any specified algorithm and problem class (here determined by the landscape generator). An experiment is then clearly specified as a point in this space, in a way that is analogous to other areas of experimental algorithmics, and more generally in experimental design. Experimental results are presented, demonstrating the use of the landscape generator. In particular, we analyze some simple, continuous estimation of distribution algorithms, and gain new insights into the behavior of these algorithms using the landscape generator.
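The flavor of such a generator can be sketched as the upper envelope of randomly placed Gaussian peaks, where the number of peaks and their common width play the role of intuitive geometric parameters. The specific parameterization below (parameter names, ranges, the pinned global optimum) is illustrative, not the paper's.

```python
# Minimal sketch of a tunable test-landscape generator: fitness is the
# maximum over m Gaussian bumps on the unit hypercube, so n_peaks and
# width directly control modality and ruggedness. All parameter choices
# here are invented for illustration.
import math
import random

def make_landscape(n_dim, n_peaks, width, seed=0):
    rng = random.Random(seed)
    centers = [[rng.uniform(0.0, 1.0) for _ in range(n_dim)]
               for _ in range(n_peaks)]
    heights = [rng.uniform(0.5, 1.0) for _ in range(n_peaks)]
    heights[0] = 1.0  # pin the global optimum's height at 1.0

    def f(x):
        # each peak contributes a Gaussian bump; fitness is the envelope
        best = 0.0
        for c, h in zip(centers, heights):
            d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
            best = max(best, h * math.exp(-d2 / (2.0 * width ** 2)))
        return best

    return f, centers[0]   # landscape and location of the global optimum

f, opt = make_landscape(n_dim=2, n_peaks=5, width=0.1)
```

Because the generator returns the optimum's location, an experiment over (algorithm parameters × landscape parameters) can score runs by distance to the known optimum, which is the kind of experimental space the abstract describes.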
Online Feature Selection with Streaming Features
2012
Cited by 3 (2 self)
Abstract:
We propose a new online feature selection framework for applications with streaming features, where the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time while the number of training examples remains fixed. This is in contrast with traditional online learning methods, which only deal with sequentially added observations, with little attention paid to streaming features. The critical challenges for online streaming feature selection include (1) the continuous growth of feature volumes over time; (2) a large feature space, possibly of unknown or infinite size; and (3) the unavailability of the entire feature set before learning starts. In this paper, we present a novel Online Streaming Feature Selection (OSFS) method to select strongly relevant and non-redundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.
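The two-stage idea (online relevance analysis, then online redundancy analysis) can be sketched as follows. The relevance and redundancy tests below, absolute Pearson correlation against fixed thresholds, are a deliberate simplification of the conditional-independence tests the OSFS method actually uses; the threshold values are invented.

```python
# Hedged sketch of streaming feature selection: features arrive one at a
# time; each is kept only if it is relevant to the target and not
# redundant given the features already selected.
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def stream_select(feature_stream, target, rel_t=0.3, red_t=0.95):
    selected = []  # list of (name, values) kept so far
    for name, values in feature_stream:
        if abs(pearson(values, target)) < rel_t:
            continue                      # relevance analysis: discard
        if any(abs(pearson(values, v)) > red_t for _, v in selected):
            continue                      # redundancy analysis: discard
        selected.append((name, values))
    return [name for name, _ in selected]

y = [0.0, 1.0, 2.0, 3.0, 4.0]
stream = [("f1", [0.0, 1.0, 2.0, 3.0, 4.0]),
          ("f2", [0.0, 1.0, 2.0, 3.0, 4.01]),   # near-duplicate of f1 → redundant
          ("f3", [1.0, 1.0, 1.0, 1.0, 1.0]),    # constant → irrelevant
          ("f4", [4.0, 3.0, 2.0, 1.0, 0.0])]    # anti-correlated with f1 → redundant
kept = stream_select(stream, y)
```

Note that the selector never needs the full feature set up front: decisions are made per arriving feature against only the target and the current selection, which is what makes the streaming setting tractable.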
Bayesian Networks for Cardiovascular Monitoring
Cited by 2 (2 self)
Abstract:
Bayesian Networks provide a flexible way of incorporating different types of information into a single probabilistic model. In a medical setting, one can use these networks to create a patient model that incorporates lab test results, clinician observations, vital signs, and other forms of patient data. In this paper, we explore a simple Bayesian Network model of the cardiovascular system and evaluate its ability to predict unobservable variables using both real and simulated patient data.
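The kind of inference involved, predicting an unobservable variable from observed signs, can be illustrated with a deliberately tiny network: one hidden state and two conditionally independent vital-sign observations. The structure and all probabilities below are invented for this sketch; they are not from the paper's model.

```python
# Toy two-observation Bayesian network: a hidden state ("low cardiac
# output", hypothetical) is inferred from two observed vital signs by
# direct enumeration. All numbers are illustrative.
P_LOW = 0.2                                  # prior P(low output)
P_HR_HIGH = {True: 0.85, False: 0.3}         # P(tachycardia | state)
P_BP_LOW = {True: 0.7, False: 0.1}           # P(hypotension | state)

def posterior_low(hr_high, bp_low):
    # P(low output | observed signs) by enumerating both hidden states
    def lik(state):
        p = P_HR_HIGH[state] if hr_high else 1 - P_HR_HIGH[state]
        p *= P_BP_LOW[state] if bp_low else 1 - P_BP_LOW[state]
        return p
    num = P_LOW * lik(True)
    return num / (num + (1 - P_LOW) * lik(False))

p_both = posterior_low(hr_high=True, bp_low=True)
```

The same enumeration pattern extends to more nodes, which is what makes the framework attractive for fusing lab results, observations, and vital signs into one model.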
Uncorrelated Encounter Model of the National Airspace System Version 1.0
2008
Cited by 1 (0 self)
Abstract:
Approved for public release; distribution is unlimited. Airspace encounter models, covering close encounter situations that may occur after standard separation assurance has been lost, are a critical component in the safety assessment of aviation procedures and collision avoidance systems. Of particular relevance to Unmanned Aircraft Systems (UAS) is the potential for encountering general aviation aircraft that are flying under Visual Flight Rules (VFR) and that may not be in contact with air traffic control. In response to the need for a model of these types of encounters, Lincoln Laboratory undertook an extensive radar data collection and modeling effort involving more than 120 sensors across the U.S. This report describes the structure and content of that encounter model. The model is based on the use of Bayesian networks to represent relationships between dynamic variables and to construct random aircraft trajectories that are statistically similar to those observed in the radar data. The result is a framework …
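The trajectory-construction idea, sampling dynamic variables from learned conditional distributions and integrating them forward, can be sketched with a single discretized variable. The turn-rate states and transition probabilities below are invented for illustration; the report's model learns such distributions from radar data over many more variables.

```python
# Toy sketch: a Markov model over a discretized turn-rate variable,
# sampled forward to produce a statistically plausible heading history.
# States and transition probabilities are hypothetical, not the report's.
import random

TURN_STATES = [-3.0, 0.0, 3.0]          # deg/s: left turn, straight, right turn
TRANS = {                               # P(next turn state | current state)
    -3.0: [0.60, 0.35, 0.05],
     0.0: [0.10, 0.80, 0.10],
     3.0: [0.05, 0.35, 0.60],
}

def sample_heading_trajectory(n_steps, dt=1.0, seed=0):
    rng = random.Random(seed)
    turn, heading = 0.0, 0.0
    headings = [heading]
    for _ in range(n_steps):
        # draw the next turn-rate state conditioned on the current one
        turn = rng.choices(TURN_STATES, weights=TRANS[turn])[0]
        heading = (heading + turn * dt) % 360.0
        headings.append(heading)
    return headings

traj = sample_heading_trajectory(100)
```

Because successive turn rates are correlated through the transition table, sampled trajectories exhibit sustained turns rather than white-noise heading jitter, which is the property that makes such models useful for encounter simulation.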
Model-assisted approaches to complex survey sampling from finite populations using Bayesian networks: a tool for integration of different sources
Proceedings of XXII Statistics Canada International Methodology Symposium, 2005
Cited by 1 (1 self)
Abstract:
A class of estimators based on the dependency structure of a multivariate variable of interest and the survey design is defined. The dependency structure is the one described by Bayesian networks. This class admits ratio-type estimators as a subclass identified by a particular dependency structure. A Monte Carlo simulation shows that adopting the estimator corresponding to the population structure is more efficient than the alternatives. It is also shown how this class adapts to the problem of integrating information from two surveys through the probability-updating machinery of Bayesian networks. KEY WORDS: Graphical models; probability update.
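The efficiency claim for the ratio-type subclass is easy to reproduce in a toy Monte Carlo: when the study variable is nearly proportional to an auxiliary variable with known total, the ratio estimator of the population total has much lower mean squared error than the simple expansion estimator. The synthetic population and sample sizes below are invented for illustration, not taken from the paper.

```python
# Toy Monte Carlo: ratio estimator vs. expansion estimator of a total
# under simple random sampling, on a population where y ≈ 2x.
import random

rng = random.Random(1)
N, n = 1000, 50
x = [rng.uniform(1.0, 10.0) for _ in range(N)]        # auxiliary variable
y = [2.0 * xi + rng.gauss(0.0, 0.5) for xi in x]      # study variable
Ty, Tx = sum(y), sum(x)                               # true totals (Tx known)

def one_draw():
    s = rng.sample(range(N), n)
    expansion = N * sum(y[i] for i in s) / n          # N * sample mean
    ratio = Tx * sum(y[i] for i in s) / sum(x[i] for i in s)
    return expansion, ratio

draws = [one_draw() for _ in range(500)]
mse_mean = sum((e - Ty) ** 2 for e, _ in draws) / len(draws)
mse_ratio = sum((r - Ty) ** 2 for _, r in draws) / len(draws)
```

The ratio estimator wins here because its error is driven by the residuals of y on x rather than by the full variance of y, which mirrors the paper's point that the estimator matching the population's dependency structure is the efficient one.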
The Supposed Competition between Theories of Human Causal Inference
Cited by 1 (0 self)
Abstract:
Newsome (2003; “The debate between current versions of covariation and mechanism approaches to causal inference,” Philosophical Psychology, 16, 87–107) recently published a critical review of psychological theories of human causal inference. In that review, he characterized covariation and mechanism theories, the two dominant theory types, as competing, and offered possible ways to integrate them. I argue that Newsome has misunderstood the theoretical landscape, and that covariation and mechanism theories do not directly conflict. Rather, they rely on distinct sets of reliable indicators of causation, and focus on different types of causation (type vs. token). There are certainly debates in the research field, but the theoretical landscape is not as fractured as Newsome suggests, and a potential unifying framework has already emerged using causal Bayes nets. Philosophical work on causal epistemology matters for psychologists, but not in the way Newsome suggests.
A Bayesian Network for Outbreak Detection and Prediction
In Proceedings of AAAI-06, 2006
Cited by 1 (1 self)
Abstract:
Health care officials are increasingly concerned with knowing early whether an outbreak of a particular disease is unfolding. We often have daily counts of some variable that are indicative of the number of individuals in a given community becoming sick each day with a particular disease. By monitoring these daily counts we can possibly detect an outbreak at an early stage. A number of classical time-series methods have been applied to outbreak detection based on monitoring daily counts of some variables. These classical methods only give us an alert as to whether there may be an outbreak. They do not predict properties of the outbreak such as its size, its duration, and how far we are into the outbreak. Knowing the probable values of these variables can help guide us to a cost-effective decision that maximizes expected utility. Bayesian networks have become one of the most prominent architectures for reasoning under uncertainty in artificial intelligence. We present an intelligent system, implemented using a Bayesian network, which not only detects an outbreak, but predicts its size and duration, and estimates how far we are into the outbreak. We show results of investigating the performance of the system using simulated outbreaks based on real outbreak data. These results indicate that the system shows promise of being able to predict properties of an outbreak.
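A stripped-down version of the detection half of the idea: treat each day's count as Poisson under either a baseline rate or an elevated outbreak rate, and update the posterior probability of the outbreak hypothesis day by day. The static two-hypothesis setup and all rates below are invented for illustration; the paper's network additionally predicts the outbreak's size and duration.

```python
# Sequential Bayesian outbreak detection on daily counts (toy version:
# two fixed hypotheses, hypothetical Poisson rates and prior).
import math

def poisson_pmf(k, lam):
    # P(K = k) for a Poisson(lam) daily count
    return math.exp(-lam) * lam ** k / math.factorial(k)

def outbreak_posterior(counts, lam_base=10.0, lam_out=20.0, prior_out=0.01):
    # update P(outbreak) after each day's count, reusing yesterday's
    # posterior as today's prior
    p_out = prior_out
    history = []
    for k in counts:
        num = p_out * poisson_pmf(k, lam_out)
        den = num + (1 - p_out) * poisson_pmf(k, lam_out if False else lam_base) \
              if False else num + (1 - p_out) * poisson_pmf(k, lam_base)
        p_out = num / den
        history.append(p_out)
    return history

# three baseline-looking days followed by three elevated days
daily = outbreak_posterior([10, 11, 9, 22, 25, 24])
```

Unlike a fixed-threshold alert, the posterior trace gives a calibrated probability that can feed directly into an expected-utility decision, which is the motivation the abstract gives for the Bayesian-network approach.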
Modular Bayesian Inference and Learning of Decision Networks as Stand-Alone Mechanisms of the MABEL Model: Implications for Visualization, Comprehension, and Policy-Making
2006
Cited by 1 (0 self)
Abstract:
This paper describes a modular component of the MABEL model agents’ cognitive inference mechanism. The probabilistic and probabilogic representation of the agents’ environment and state space is coupled with Bayesian belief and decision network functionality, which holds Markovian semiparametric properties. Different approaches to modeling multiagent systems are described and analyzed; problem-, model-, and knowledge-driven approaches to agent inference and learning are emphasized. The notion of modularity in agent-based modeling components is conceptualized. The modular architecture of the decision inference mechanism allows for a flexible architectural design that can be either endogenous or exogenous to the agent-based simulation model. A suite of decision support tools for modular network inference in the MABEL model is showcased; the emphasis is on the component object model versus interoperability development interfaces. These tools provide the complex functionality of developing “models within models,” thus simplifying the need for extensive research support and for a high-end level of knowledge acquisition from the end-users’ perspective. Finally, the paper assesses the validity of visual modeling interfaces for data- and knowledge-acquisition mechanisms that can provide an essential link between an in vitro research model and the complex realities that are observed and processed by decision-makers, policy-makers, communities, and stakeholders. Keywords: agent-based model, MABEL, Bayesian belief networks, Bayesian decision networks, visualization, decision-theoretic inference, policy making.
Learning Genetic and Gene Bayesian Networks with Hidden Variables: Bilayer Verification algorithm
 Proceedings of the International Conference on Bioinformatics and Computational Biology (BIOCOMP), 2006
Cited by 1 (0 self)
Abstract:
To improve the recovery of gene-gene and marker-gene (eQTL) interaction networks from microarray and genetic data, we propose a new procedure for learning Bayesian networks. This algorithm, termed Bilayer Verification, starts with a user-specified leaf node and then searches upstream to locate portions of the biological interaction network that can be verified as unconfounded by hidden variables such as protein levels. We provide theoretical justification for this procedure, which learns Bayesian networks by recursively finding two levels of v-structures in the data. We discuss the specializations and efficiencies gained when exogenous variables (those with no parents), such as genetic markers, can be included in the network.
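The statistical signature a v-structure search exploits can be shown on synthetic data: for a collider X → Z ← Y, the parents X and Y are marginally independent but become dependent once their common child Z is observed. The XOR construction and correlation test below are invented for this sketch; the algorithm itself works with conditional-independence tests over real expression and marker data.

```python
# Illustration of the v-structure signature: X ⟂ Y marginally, but
# X and Y are strongly dependent given their common child Z = X XOR Y.
import random

def corr(a, b):
    # plain Pearson correlation on paired samples
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

rng = random.Random(0)
X = [rng.randint(0, 1) for _ in range(2000)]   # independent parent
Y = [rng.randint(0, 1) for _ in range(2000)]   # independent parent
Z = [x ^ y for x, y in zip(X, Y)]              # common child (collider)

marginal = corr(X, Y)                          # near zero: X ⟂ Y
z0 = [(x, y) for x, y, z in zip(X, Y, Z) if z == 0]
conditional = corr([x for x, _ in z0],         # given Z = 0, X == Y,
                   [y for _, y in z0])         # so correlation is 1
```

Detecting this asymmetry orients the edges into Z, and stacking two such levels is the "bilayer" step the abstract describes.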