Results 1 - 10 of 6,218

A Bayesian method for the induction of probabilistic networks from data

by Gregory F. Cooper, Edward Herskovits - Machine Learning, 1992
"... This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabili ..."
Abstract - Cited by 1400 (31 self)
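The score this method rests on has a closed form: the Cooper-Herskovits marginal likelihood of a structure factors into per-node terms computed from simple counts. A minimal sketch of that per-node score (the dictionary-based data encoding and the toy weather example are mine, not the paper's):

    import math
    from collections import Counter

    def ch_log_score(data, child, parents, r_child):
        """Log Cooper-Herskovits marginal likelihood of `child` given `parents`.
        data: list of dicts {variable: value}; r_child: number of values the
        child can take. Parent configurations never observed contribute 1."""
        n_ij = Counter(tuple(row[p] for p in parents) for row in data)
        n_ijk = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
        score = 0.0
        for n in n_ij.values():        # log (r-1)! / (N_ij + r - 1)! per config j
            score += math.lgamma(r_child) - math.lgamma(n + r_child)
        for n in n_ijk.values():       # log N_ijk! per (config, child-value) pair
            score += math.lgamma(n + 1)
        return score

    # Toy comparison: is 'wet' better explained with 'rain' as a parent?
    data = [{"rain": 1, "wet": 1}, {"rain": 1, "wet": 1},
            {"rain": 0, "wet": 0}, {"rain": 0, "wet": 1}]
    print(ch_log_score(data, "wet", ["rain"], r_child=2))
    print(ch_log_score(data, "wet", [], r_child=2))

Because the score factors over nodes, a greedy procedure such as the paper's K2 algorithm can grow each node's parent set one variable at a time, keeping an addition only when this score improves.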

The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity

by Clay B. Holroyd, Michael G. H. Coles - Psychological Review 109:679–709, 2002
"... The authors present a unified account of 2 neural systems concerned with the development and expression of adaptive behaviors: a mesencephalic dopamine system for reinforcement learning and a “generic ” error-processing system associated with the anterior cingulate cortex. The existence of the error ..."
Abstract - Cited by 430 (20 self) - Add to MetaCart
of the error-processing system has been inferred from the error-related negativity (ERN), a component of the event-related brain potential elicited when human participants commit errors in reaction-time tasks. The authors propose that the ERN is generated when a negative reinforcement learning signal
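The reinforcement-learning signal invoked here is the temporal-difference prediction error; a minimal TD(0) sketch (the states, values, and reward are illustrative, not from the paper):

    values = {"s0": 0.5, "s1": 0.0}   # learned state-value estimates
    alpha, gamma = 0.1, 0.9           # learning rate, discount factor

    def td_error(reward, state, next_state):
        # delta = r + gamma * V(s') - V(s); negative when worse than expected
        return reward + gamma * values[next_state] - values[state]

    delta = td_error(reward=0.0, state="s0", next_state="s1")
    values["s0"] += alpha * delta     # worse-than-expected outcome lowers V(s0)
    print(delta)                      # negative: the proposed ERN-eliciting signal

On the paper's account, the ERN is generated when such a negative signal is conveyed to the anterior cingulate cortex.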

Discovery of Inference Rules for Question Answering

by Dekang Lin, Patrick Pantel - Natural Language Engineering, 2001
"... One of the main challenges in question-answering is the potential mismatch between the expressions in questions and the expressions in texts. While humans appear to use inference rules such as “X writes Y ” implies “X is the author of Y ” in answering questions, such rules are generally unavailable ..."
Abstract - Cited by 309 (7 self)
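The question-text mismatch is easy to make concrete: given a rule such as “X writes Y” implies “X is the author of Y”, a system can expand a question's relation into its paraphrases before searching the text. A toy sketch with an invented rule table (not rules discovered by the paper's method):

    # Toy paraphrase expansion; the rule table is invented for illustration.
    RULES = {
        "X writes Y": ["X is the author of Y", "Y is written by X"],
    }

    def expand(pattern, x, y):
        """Instantiate a relation pattern and all its known paraphrases."""
        variants = [pattern] + RULES.get(pattern, [])
        return [v.replace("X", x).replace("Y", y) for v in variants]

    for phrase in expand("X writes Y", "Tolstoy", "War and Peace"):
        print(phrase)   # surface forms to look for in the text collection

The paper's contribution is discovering such rules automatically from text rather than hand-writing tables like this one.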

Hierarchical Bayesian Inference in the Visual Cortex

by Tai Sing Lee, David Mumford, 2002
"... this paper, we propose a Bayesian theory of hierarchical cortical computation based both on (a) the mathematical and computational ideas of computer vision and pattern the- ory and on (b) recent neurophysiological experimental evidence. We ,2 have proposed that Grenander's pattern theory 3 coul ..."
Abstract - Cited by 300 (2 self)
..., however, was rather limited, dealing only with binary images. Moreover, its feedback mechanisms were engaged only during the learning of the feedforward connections but not during perceptual inference, though the Gibbs sampling process for inference can potentially be interpreted as top-down feedback ...
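The suggestion that Gibbs sampling can play the role of top-down feedback is concrete enough to sketch. Below, one Gibbs sweep over a binary image under an Ising-style prior, where each pixel's update combines neighbor agreement (lateral/top-down) with an observation term (bottom-up); the coupling strengths and toy data are mine, not the model from the paper:

    import math
    import random

    def gibbs_sweep(img, obs, beta=1.0, lam=2.0):
        """One Gibbs sweep. img: latent {-1,+1} pixels; obs: noisy {-1,+1} data.
        beta couples neighbors (prior); lam ties pixels to observations."""
        h, w = len(img), len(img[0])
        for i in range(h):
            for j in range(w):
                nb = sum(img[i + di][j + dj]
                         for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= i + di < h and 0 <= j + dj < w)
                field = beta * nb + lam * obs[i][j]
                p_plus = 1.0 / (1.0 + math.exp(-2.0 * field))  # P(pixel=+1 | rest)
                img[i][j] = 1 if random.random() < p_plus else -1
        return img

    noisy = [[1, 1, -1], [1, -1, -1], [1, 1, -1]]   # toy observed image
    latent = [row[:] for row in noisy]
    for _ in range(10):
        gibbs_sweep(latent, noisy)                   # denoised binary estimate
    print(latent)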

Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks

by Arnaud Doucet, Nando de Freitas, Kevin Murphy, Stuart Russell
"... Particle filters (PFs) are powerful sampling-based inference/learning algorithms for dynamic Bayesian networks (DBNs). They allow us to treat, in a principled way, any type of probability distribution, nonlinearity and non-stationarity. They have appeared in several fields under such names as “conde ..."
Abstract - Cited by 348 (11 self)
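A minimal bootstrap particle filter shows the sample-weight-resample loop the excerpt refers to; the random-walk model and unit noise are invented for the sketch, and this is the plain PF rather than the Rao-Blackwellised variant:

    import math
    import random

    def bootstrap_pf(observations, n=500):
        """Bootstrap PF for x_t = x_{t-1} + N(0,1), y_t = x_t + N(0,1)."""
        particles = [random.gauss(0.0, 1.0) for _ in range(n)]
        estimates = []
        for y in observations:
            # 1. propagate each particle through the transition model
            particles = [x + random.gauss(0.0, 1.0) for x in particles]
            # 2. weight by the observation likelihood N(y; x, 1)
            weights = [math.exp(-0.5 * (y - x) ** 2) for x in particles]
            total = sum(weights)
            weights = [w / total for w in weights]
            # 3. posterior-mean estimate, then multinomial resampling
            estimates.append(sum(w * x for w, x in zip(weights, particles)))
            particles = random.choices(particles, weights=weights, k=n)
        return estimates

    print(bootstrap_pf([0.2, 0.5, 1.1, 0.9]))

Rao-Blackwellisation then replaces part of the sampled state with an exact filter (for instance a Kalman filter) run conditionally on the sampled part, reducing the variance of the estimates.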

Robust Higher Order Potentials for Enforcing Label Consistency

by P. Kohli, L. Ladický, P. H. S. Torr, 2009
"... This paper proposes a novel framework for labelling problems which is able to combine multiple segmentations in a principled manner. Our method is based on higher order conditional random fields and uses potentials defined on sets of pixels (image segments) generated using unsupervised segmentation ..."
Abstract - Cited by 259 (34 self)
... P^n Potts model recently proposed by Kohli et al. We prove that the optimal swap and expansion moves for energy functions composed of these potentials can be computed by solving an st-mincut problem. This enables the use of powerful graph-cut based move-making algorithms for performing inference ...
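The robust potential itself is simple to state: the cost of a segment is zero when its pixels agree, grows linearly with the number of pixels deviating from the dominant label, and saturates at a maximum. A sketch (the truncation parameter q and cost gamma_max are illustrative):

    from collections import Counter

    def robust_pn_potential(labels, gamma_max=10.0, q=5):
        """Higher-order cost over one segment's pixel labels: linear in the
        number of pixels deviating from the dominant label, truncated."""
        n_deviant = len(labels) - Counter(labels).most_common(1)[0][1]
        return min(gamma_max, n_deviant * gamma_max / q)

    print(robust_pn_potential(["sky"] * 20))                  # 0.0: consistent
    print(robust_pn_potential(["sky"] * 18 + ["tree"] * 2))   # partial penalty
    print(robust_pn_potential(["sky"] * 10 + ["tree"] * 10))  # saturated

The truncation is what makes the potential robust: a segment produced by imperfect unsupervised segmentation can be partially overruled without paying an unbounded price.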

Fields of experts: A framework for learning image priors

by Stefan Roth, Michael J. Black - In CVPR, 2005
"... We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach extends traditional Markov Random Field (MRF) models by learning potential functions over extended pixel neighborhood ..."
Abstract - Cited by 292 (4 self)
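The prior described here can be written down compactly: a product over image patches and learned filters of heavy-tailed Student-t expert functions. A minimal sketch of evaluating the unnormalized log-prior (the random filters and unit weights below are placeholders for the learned ones):

    import numpy as np

    def foe_log_prior(image, filters, alphas):
        """Unnormalized Fields-of-Experts log-prior with Student-t experts:
        sum over patches and experts of -alpha * log(1 + 0.5 * (J . x)^2)."""
        h, w = image.shape
        fh, fw = filters[0].shape
        total = 0.0
        for i in range(h - fh + 1):          # every fh-by-fw clique (patch)
            for j in range(w - fw + 1):
                patch = image[i:i + fh, j:j + fw]
                for J, a in zip(filters, alphas):
                    resp = float(np.sum(J * patch))
                    total += -a * np.log1p(0.5 * resp * resp)
        return total

    rng = np.random.default_rng(0)
    filters = [rng.standard_normal((3, 3)) for _ in range(8)]  # placeholders
    alphas = [1.0] * 8                                         # placeholders
    print(foe_log_prior(rng.standard_normal((16, 16)), filters, alphas))

In the paper the filters and weights are learned from natural images by contrastive divergence; with the log-prior differentiable, tasks like denoising or inpainting can proceed by gradient ascent on it plus a data term.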

Dependency networks for inference, collaborative filtering, and data visualization

by David Heckerman, David Maxwell Chickering, Christopher Meek, Robert Rounthwaite, Carl Kadie - Journal of Machine Learning Research
"... We describe a graphical model for probabilistic relationships|an alternative tothe Bayesian network|called a dependency network. The graph of a dependency network, unlike aBayesian network, is potentially cyclic. The probability component of a dependency network, like aBayesian network, is a set of ..."
Abstract - Cited by 208 (12 self)
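Because the probability component is just one conditional distribution per variable, approximate inference can be sketched as ordered Gibbs sampling through those conditionals; the two-variable tables below are invented for illustration:

    import random

    # Invented local conditionals for a two-variable dependency network.
    def p_rain_given(sprinkler):      # P(rain = 1 | sprinkler)
        return 0.1 if sprinkler else 0.4

    def p_sprinkler_given(rain):      # P(sprinkler = 1 | rain)
        return 0.05 if rain else 0.5

    def ordered_gibbs(n_samples=10000):
        rain, sprinkler = False, False
        hits = 0
        for _ in range(n_samples):
            rain = random.random() < p_rain_given(sprinkler)
            sprinkler = random.random() < p_sprinkler_given(rain)
            hits += int(rain)
        return hits / n_samples       # approximate marginal P(rain = 1)

    print(ordered_gibbs())

Since the local conditionals are learned independently, they need not cohere with any single joint distribution; the stationary distribution of this sampler is what the paper treats as the model's joint.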

Word Learning as Bayesian Inference

by Fei Xu, Joshua B. Tenenbaum - In Proceedings of the 22nd Annual Conference of the Cognitive Science Society, 2000
"... The authors present a Bayesian framework for understanding how adults and children learn the meanings of words. The theory explains how learners can generalize meaningfully from just one or a few positive examples of a novel word’s referents, by making rational inductive inferences that integrate pr ..."
Abstract - Cited by 175 (33 self)
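The generalization machinery can be sketched with nested hypotheses and a size-principle likelihood: each consistent example contributes a factor 1/|h|, so a few examples quickly concentrate the posterior on the narrowest consistent hypothesis. The hypothesis sets and prior below are invented:

    # Invented nested hypotheses for a novel word's extension: (members, prior).
    HYPOTHESES = {
        "dalmatians": ({"dal1", "dal2", "dal3"}, 0.2),
        "dogs": ({"dal1", "dal2", "dal3", "poodle", "lab"}, 0.3),
        "animals": ({"dal1", "dal2", "dal3", "poodle", "lab", "cat"}, 0.5),
    }

    def posterior(examples):
        """P(h | examples) with the size-principle likelihood (1/|h|)^n."""
        scores = {}
        for name, (ext, prior) in HYPOTHESES.items():
            consistent = all(x in ext for x in examples)
            scores[name] = prior * (1.0 / len(ext)) ** len(examples) if consistent else 0.0
        total = sum(scores.values())
        return {name: s / total for name, s in scores.items()}

    print(posterior(["dal1"]))                  # broader meanings still viable
    print(posterior(["dal1", "dal2", "dal3"]))  # "dalmatians" now dominates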

The Vocabulary of Brain Potentials: Inferring Cognitive Events from Brain Potentials in Operational Settings

1976