Results 1 - 10 of 681
Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories
, 2004
"... Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been te ..."
Cited by 784 (16 self)
are learnt incrementally in a Bayesian manner. Our incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm, as well as to one based on maximum-likelihood. The incremental and batch versions have comparable classification performance on small training sets, but incremental
Object class recognition by unsupervised scale-invariant learning
- In CVPR
, 2003
"... We present a method to learn and recognize object class models from unlabeled and unsegmented cluttered scenes in a scale invariant manner. Objects are modeled as flexible constellations of parts. A probabilistic representation is used for all aspects of the object: shape, appearance, occlusion and ..."
Cited by 1127 (50 self)
Bayesian manner to classify images. The flexible nature of the model is demonstrated by excellent results over a range of datasets including geometrically constrained classes (e.g. faces, cars) and flexible objects (such as animals).
A Variational Bayesian Framework for Graphical Models
- In Advances in Neural Information Processing Systems 12
, 2000
"... This paper presents a novel practical framework for Bayesian model averaging and model selection in probabilistic graphical models. Our approach approximates full posterior distributions over model parameters and structures, as well as latent variables, in an analytical manner. These posteriors ..."
Cited by 267 (7 self)
Bayesian Map Learning in Dynamic Environments
- In Neural Information Processing Systems (NIPS)
"... We show how map learning can be formulated as inference in a graphical model, which allows us to handle changing environments in a natural manner. We describe several different approximation schemes for the problem, and illustrate some results on a simulated grid-world with doors that can open a ..."
Cited by 163 (2 self)
Interpreting Bayesian Logic Programs
- Proceedings of the Work-in-Progress Track at the 10th International Conference on Inductive Logic Programming
, 2001
"... Various proposals for combining first order logic with Bayesian nets exist. We introduce the formalism of Bayesian logic programs, which is basically a simplification and reformulation of Ngo and Haddawy's probabilistic logic programs. However, Bayesian logic programs are sufficiently powerful to ..."
Cited by 126 (8 self)
to represent essentially the same knowledge in a more elegant manner. The elegance is illustrated by the fact that they can represent both Bayesian nets and definite clause programs (as in "pure" Prolog) and that their kernel in Prolog is actually an adaptation of a usual Prolog meta-interpreter.
Bayesian modeling of manner and path psychological data
, 2004
"... How people and computers can learn the meaning of words has long been a key question for both AI and cognitive science. It is hypothesized that a person acquires a bias to favor the characteristics of their native language, in order to aid word learning. Other hypothesized aids are syntactic bootst ..."
Cited by 1 (1 self)
these components work together is key to understanding word learning. Using cognitive psychology and computer science as a platform, this thesis attempts to tackle these questions using the classic example of manner and path verb bias. A series of cognitive psychology experiments was designed to gather information on
Inferring High-Level Behavior from Low-Level Sensors
, 2003
"... We present a method of learning a Bayesian model of a traveler moving through an urban environment. This technique is novel in that it simultaneously learns a unified model of the traveler's current mode of transportation as well as his most likely route, in an unsupervised manner. The model ..."
Cited by 200 (17 self)
Inferring Parameters and Structure of Latent Variable Models by Variational Bayes
, 1999
"... Current methods for learning graphical models with latent variables and a fixed structure estimate optimal values for the model parameters. Whereas this approach usually produces overfitting and suboptimal generalization performance, carrying out the Bayesian program of computing the full posterior ..."
Cited by 198 (1 self)
Network Control by Bayesian Broadcast
- IEEE Transactions on Information Theory
, 1987
"... A transmission control strategy is described for slotted-ALOHA-type broadcast channels with ternary feedback. At each time slot, each station estimates the probability that n stations are ready to transmit a packet for each n, using Bayes' rule and the observed history of collisions, succe ..."
Cited by 69 (0 self)
, successful transmissions, and holes (empty slots). A station transmits a packet in a probabilistic manner based on these estimates. This strategy is called Bayesian broadcast. An elegant and very practical strategy, pseudo-Bayesian broadcast, is then derived by approximating the probability estimates with a
Non-Bayesian Social Learning
, 2011
"... We develop a dynamic model of opinion formation in social networks when the information required for learning a payoff-relevant parameter may not be at the disposal of any single agent. Individuals engage in communication with their neighbors in order to learn from their experiences. However, instea ..."
Cited by 25 (6 self)
, instead of incorporating the views of their neighbors in a fully Bayesian manner, agents use a simple updating rule which linearly combines their personal experience and the views of their neighbors (even though the neighbors' views may be quite inaccurate). This non-Bayesian learning rule is motivated