Results 1–10 of 203
Hierarchical Bayesian Inference in the Visual Cortex
, 2002
Abstract

Cited by 296 (2 self)
In this paper, we propose a Bayesian theory of hierarchical cortical computation based both on (a) the mathematical and computational ideas of computer vision and pattern theory and on (b) recent neurophysiological experimental evidence. We have previously proposed that Grenander's pattern theory [3] could potentially model the brain as a generative model in such a way that feedback serves to disambiguate and 'explain away' the earlier representation. The Helmholtz machine [4, 5] was an excellent step towards approximating this proposal, with feedback implementing priors. Its development, however, was rather limited, dealing only with binary images. Moreover, its feedback mechanisms were engaged only during the learning of the feedforward connections but not during perceptual inference, though the Gibbs sampling process for inference can potentially be interpreted as top-down feedback disambiguating low-level representations. Rao and Ballard's predictive coding/Kalman filter model [6] did integrate generative feedback in the perceptual inference process, but it was primarily a linear model and thus severely limited in practical utility. The data-driven Markov Chain Monte Carlo approach of Zhu and colleagues [7, 8] might be the most successful recent application of this proposal in solving real and difficult computer vision problems using generative models, though its connection to the visual cortex has not been explored. Here, we bring in a powerful and widely applicable paradigm from artificial intelligence and computer vision to propose some new ideas about the algorithms of visual cortical processing and the nature of representations in the visual cortex. We will review some of our and others' neurophysiological experimental data to lend support to these ideas.
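The predictive coding/Kalman filter idea mentioned above — feedback carrying a generative prediction, with the residual error driving inference — can be sketched in a few lines. This is a minimal illustrative toy in the spirit of a linear generative model, not the paper's model; the dimensions, learning rate, and prior precision are all assumed values.

```python
import numpy as np

# Toy linear predictive coding: infer a latent representation r by gradient
# ascent on the log posterior of a generative model x ~ N(W r, I), with a
# Gaussian prior on r. All sizes and constants here are illustrative.
rng = np.random.default_rng(0)

n_input, n_latent = 8, 3
W = rng.normal(size=(n_input, n_latent))            # generative (feedback) weights
x = W @ np.array([1.0, -0.5, 0.25]) + 0.05 * rng.normal(size=n_input)

r = np.zeros(n_latent)                              # latent estimate
lr, prior_precision = 0.01, 0.1
for _ in range(1000):
    error = x - W @ r                               # top-down prediction error
    r += lr * (W.T @ error - prior_precision * r)   # error-driven update
```

At convergence the feedback prediction `W @ r` explains most of the input, leaving only noise in the error signal.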
Motion illusions as optimal percepts
 Nature Neuroscience
, 2002
Abstract

Cited by 118 (6 self)
The pattern of local image velocities on the retina encodes important environmental information. Psychophysical evidence reveals that while humans are generally able to extract this information, they can easily be deceived into seeing incorrect velocities. We show that these ’illusions’ arise naturally in a system that attempts to estimate local image velocity. We formulate a model for visual motion perception using standard estimation theory, under the assumptions that (a) there is noise in the initial measurements, and (b) slower motions are more likely to occur than faster ones. A specific instantiation of such a velocity estimator accounts for a wide variety of psychophysical phenomena.
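The estimator described in this abstract reduces, in the simplest Gaussian case, to a shrinkage rule: a noisy velocity measurement combined with a zero-mean prior favoring slow speeds yields a MAP estimate biased toward zero, and the bias grows as measurement noise grows (e.g., at low contrast). A minimal sketch, with illustrative sigma values rather than the paper's fitted parameters:

```python
import numpy as np

def map_velocity(v_measured, sigma_noise, sigma_prior):
    """MAP estimate for v ~ N(0, sigma_prior^2) with measurement ~ N(v, sigma_noise^2)."""
    gain = sigma_prior**2 / (sigma_prior**2 + sigma_noise**2)
    return gain * v_measured  # posterior mean shrinks the measurement toward zero

# Noisier measurement (e.g., low contrast) -> stronger bias toward slow speeds.
v_high_contrast = map_velocity(10.0, sigma_noise=1.0, sigma_prior=5.0)
v_low_contrast = map_velocity(10.0, sigma_noise=5.0, sigma_prior=5.0)
```

The perceived speed is always below the measured one, and more so at low contrast — the qualitative signature of the illusions the paper explains.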
Bayesian computation in recurrent neural circuits
 Neural Computation
, 2004
Abstract

Cited by 90 (4 self)
A large number of human psychophysical results have been successfully explained in recent years using Bayesian models. However, the neural implementation of such models remains largely unclear. In this paper, we show that a network architecture commonly used to model the cerebral cortex can implement Bayesian inference for an arbitrary hidden Markov model. We illustrate the approach using an orientation discrimination task and a visual motion detection task. In the case of orientation discrimination, we show that the model network can infer the posterior distribution over orientations and correctly estimate stimulus orientation in the presence of significant noise. In the case of motion detection, we show that the resulting model network exhibits direction selectivity and correctly computes the posterior probabilities over motion direction and position. When used to solve the well-known random dots motion discrimination task, the model generates responses that mimic the activities of evidence-accumulating neurons in cortical areas LIP and FEF. The framework introduced in the paper posits a new interpretation of cortical activities in terms of log posterior probabilities of stimuli occurring in the natural world.
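The computation being mapped onto the recurrent network here is exact Bayesian filtering for a discrete hidden Markov model: predict the next state with the transition matrix, weight by the likelihood of the new observation, and renormalize. A minimal two-state sketch with illustrative parameters:

```python
import numpy as np

T = np.array([[0.9, 0.1],
              [0.1, 0.9]])            # transition probabilities P(next | current)
E = np.array([[0.8, 0.2],
              [0.2, 0.8]])            # emission probabilities P(obs | state)

posterior = np.array([0.5, 0.5])      # uniform prior over the two hidden states
for obs in [0, 0, 1, 0, 0]:
    predicted = T.T @ posterior       # prediction step (recurrent dynamics)
    posterior = E[:, obs] * predicted # weight by likelihood of the observation
    posterior /= posterior.sum()      # normalize to a probability distribution
```

In the paper's interpretation, neural activities track the (log) posterior that this loop maintains as evidence accumulates.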
Optimal predictions in everyday cognition
 Psychological Science
, 2006
Abstract

Cited by 75 (19 self)
Human perception and memory are often explained as optimal statistical inferences, informed by accurate prior probabilities. In contrast, cognitive judgments are usually viewed as following error-prone heuristics, insensitive to priors. We examined the optimality of human cognition in a more realistic context than typical laboratory studies, asking people to make predictions about the duration or extent of everyday phenomena such as human life spans and the box-office take of movies. Our results suggest that everyday cognitive judgments follow the same optimal statistical principles as perception and memory, and reveal a close correspondence between people's implicit probabilistic models and the statistics of the world. If you were assessing the prospects of a 60-year-old man, how much longer would …
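The prediction rule studied in this line of work combines a prior over total durations with the likelihood of observing the quantity at a random point of its span, P(t | t_total) = 1/t_total for t ≤ t_total, and reports the posterior median. A sketch for the life-span example; the rough Gaussian prior over life spans is an illustrative assumption, not the empirical distribution the paper uses:

```python
import numpy as np

def predicted_total(t, support, prior_pdf):
    """Posterior median of t_total given elapsed duration t."""
    posterior = np.where(support >= t, prior_pdf / support, 0.0)  # prior x 1/t_total
    posterior /= posterior.sum()
    cdf = np.cumsum(posterior)
    return support[np.searchsorted(cdf, 0.5)]                     # posterior median

ages = np.arange(1.0, 121.0)
lifespan_prior = np.exp(-0.5 * ((ages - 75.0) / 15.0) ** 2)  # illustrative prior
prediction = predicted_total(60.0, ages, lifespan_prior)
```

With a near-Gaussian prior the prediction barely moves with t until t approaches the prior mean — matching the intuition that a 60-year-old's predicted life span is close to the population's typical life span.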
Multiresolution image classification by hierarchical modeling with two dimensional hidden Markov models
 IEEE Transactions on Information Theory
, 2000
Abstract

Cited by 71 (9 self)
This paper treats a multiresolution hidden Markov model for classifying images. Each image is represented by feature vectors at several resolutions, which are statistically dependent as modeled by the underlying state process, a multiscale Markov mesh. Unknowns in the model are estimated by maximum likelihood, in particular by employing the expectation-maximization algorithm. An image is classified by finding the optimal set of states with maximum a posteriori probability. States are then mapped into classes. The multiresolution model enables multiscale information about context to be incorporated into classification. Suboptimal algorithms based on the model provide progressive classification that is much faster than the algorithm based on single-resolution hidden Markov models.
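The maximum a posteriori decoding step at the heart of this classification scheme is, in the single-scale 1-D case, the familiar Viterbi algorithm: dynamic programming over log probabilities with backpointers. A sketch of that 1-D analogue (the 2-D Markov-mesh version the paper uses is more involved); the model parameters below are illustrative:

```python
import numpy as np

def viterbi(obs, log_pi, log_T, log_E):
    """MAP state sequence for a discrete HMM, in log space."""
    delta = log_pi + log_E[:, obs[0]]       # best log score ending in each state
    back = []                               # backpointers for path recovery
    for o in obs[1:]:
        scores = delta[:, None] + log_T     # scores[i, j]: best path i -> j
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + log_E[:, o]
    path = [int(delta.argmax())]
    for ptr in reversed(back):              # trace backpointers to the start
        path.append(int(ptr[path[-1]]))
    return path[::-1]

log_pi = np.log([0.5, 0.5])
log_T = np.log([[0.9, 0.1], [0.1, 0.9]])    # sticky transitions
log_E = np.log([[0.8, 0.2], [0.2, 0.8]])
states = viterbi([0, 0, 1, 1, 1], log_pi, log_T, log_E)
```

The decoded states are then mapped into class labels, mirroring the paper's states-to-classes step.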
Slow and Smooth: a Bayesian theory for the combination of local motion signals in human vision
, 1998
Abstract

Cited by 69 (3 self)
In order to estimate the motion of an object, the visual system needs to combine multiple local measurements, each of which carries some degree of ambiguity. We present a model of motion perception whereby measurements from different image regions are combined according to a Bayesian estimator: the estimated motion maximizes the posterior probability assuming a prior favoring slow and smooth velocities. In reviewing a large number of previously published phenomena we find that the Bayesian estimator predicts a wide range of psychophysical results. This suggests that the seemingly complex set of illusions arise from a single computational strategy that is optimal under reasonable assumptions.

1 Introduction

Estimating motion in scenes containing multiple, complex motions remains a difficult problem for computer vision systems, yet is performed effortlessly by human observers. Motion analysis in such scenes imposes conflicting demands on the design of a vision system (Braddick, 1993). …
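The combination rule described in this abstract can be sketched in the standard aperture-problem setup: each local measurement constrains only the velocity component along its gradient normal n_i, and a Gaussian slowness prior makes the combined MAP estimate a regularized least-squares solution. The normals, measurements, and prior weight below are illustrative:

```python
import numpy as np

true_v = np.array([3.0, 1.0])
normals = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.7071, 0.7071]])        # local gradient directions
c = normals @ true_v                          # measured normal speeds n_i . v

# MAP estimate: minimize sum_i (n_i . v - c_i)^2 + lambda_slow * |v|^2.
lambda_slow = 0.1                             # illustrative prior weight
A = normals.T @ normals + lambda_slow * np.eye(2)
v_hat = np.linalg.solve(A, normals.T @ c)     # closed-form quadratic minimum
```

The prior shrinks the estimate toward zero, so the recovered speed is slightly below the true speed — the same slow-speed bias the paper uses to explain motion illusions.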
Probabilistic kernels for the classification of autoregressive visual processes
 In IEEE Conference on Computer Vision and Pattern Recognition
, 2005
Abstract

Cited by 66 (17 self)
We present a framework for the classification of visual processes that are best modeled with spatiotemporal autoregressive models. The new framework combines the modeling power of a family of models known as dynamic textures and the generalization guarantees, for classification, of the support vector machine classifier. This combination is achieved by the derivation of a new probabilistic kernel based on the Kullback-Leibler (KL) divergence between Gauss-Markov processes. In particular, we derive the KL kernel for dynamic textures in both 1) the image space, which describes both the motion and appearance components of the spatiotemporal process, and 2) the hidden state space, which describes the temporal component alone. Together, the two kernels cover a large variety of video classification problems, including the cases where classes can differ in both appearance and motion and the cases where appearance is similar for all classes and only motion is discriminant. Experimental evaluation on two databases shows that the new classifier achieves superior performance over existing solutions.
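The kernel construction behind this framework can be illustrated in the simplest Gaussian case: compute the KL divergence between two multivariate Gaussians in closed form, symmetrize it, and exponentiate the negative divergence into a kernel value. This is a sketch of the idea for static Gaussians, not the paper's derivation for full Gauss-Markov processes; the scale parameter `a` is an assumed hyperparameter:

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) in closed form."""
    d = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def kl_kernel(mu0, S0, mu1, S1, a=1.0):
    """Kernel from the symmetrized KL divergence: exp(-a * (KL01 + KL10))."""
    sym = kl_gaussian(mu0, S0, mu1, S1) + kl_gaussian(mu1, S1, mu0, S0)
    return np.exp(-a * sym)

mu0, S0 = np.zeros(2), np.eye(2)
mu1, S1 = np.array([1.0, 0.0]), 2.0 * np.eye(2)
k_same = kl_kernel(mu0, S0, mu0, S0)   # identical distributions give kernel 1
k_diff = kl_kernel(mu0, S0, mu1, S1)   # distinct distributions give a value in (0, 1)
```

Such a kernel can then be plugged into a standard SVM, which is how the paper couples the generative dynamic-texture models with a discriminative classifier.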