Results 1–7 of 7
Free Energy, Value, and Attractors, 2012
Abstract

Cited by 3 (1 self)
It has been suggested recently that action and perception can be understood as minimising the free energy of sensory samples. This ensures that agents sample the environment to maximise the evidence for their model of the world, such that exchanges with the environment are predictable and adaptive. However, the free energy account does not invoke reward or cost functions from reinforcement learning and optimal control theory. We therefore ask whether reward is necessary to explain adaptive behaviour. The free energy formulation uses ideas from statistical physics to explain action in terms of minimising sensory surprise. Conversely, reinforcement learning has its roots in behaviourism and engineering and assumes that agents optimise a policy to maximise future reward. This paper tries to connect the two formulations and concludes that optimal policies correspond to empirical priors on the trajectories of hidden environmental states, which compel agents to seek out the (valuable) states they expect to encounter.
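The abstract's central claim, that empirical priors can play the role reward plays in reinforcement learning, can be sketched in a toy example (entirely illustrative; the states, probabilities, and function names below are our own, not the paper's):

```python
import math

def surprise(outcome, prior_probs):
    """Surprise (self-information) of a sensory outcome under the agent's model."""
    return -math.log(prior_probs[outcome])

def choose_action(actions, outcome_of, prior_probs):
    """Pick the action whose predicted sensory outcome minimises surprise.
    A high prior probability here plays the role a high reward plays in
    reinforcement learning: expected states are sought out."""
    return min(actions, key=lambda a: surprise(outcome_of[a], prior_probs))

prior = {"warm": 0.7, "cold": 0.3}                    # the agent expects to be warm
outcome_of = {"stay": "cold", "move_to_sun": "warm"}  # deterministic toy dynamics
best = choose_action(["stay", "move_to_sun"], outcome_of, prior)
# → "move_to_sun": the agent seeks the (valuable) state it expects to encounter
```

The point of the sketch is only that no explicit reward signal appears anywhere: the prior over outcomes does all the work.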
2010
Abstract

Cited by 1 (0 self)
We suggested recently that attention can be understood as inferring the level of uncertainty or precision during hierarchical perception. In this paper, we try to substantiate this claim using neuronal simulations of directed spatial attention and biased competition. These simulations assume that neuronal activity encodes a probabilistic representation of the world that optimizes free energy in a Bayesian fashion. Because free energy bounds surprise or the (negative) log-evidence for internal models of the world, this optimization can be regarded as evidence accumulation or (generalized) predictive coding. Crucially, both predictions about the state of the world generating sensory data and the precision of those data have to be optimized. Here, we show that if the precision depends on the states, one can explain many aspects of attention. We illustrate this in the context of the Posner paradigm, using the simulations to generate both psychophysical and electrophysiological responses. These simulated responses are consistent with attentional bias or gating, competition for attentional resources, attentional capture and associated speed-accuracy trade-offs. Furthermore, if we present both attended and non-attended stimuli simultaneously, biased competition for neuronal representation emerges as a principled and straightforward property of Bayes-optimal perception.
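The role precision plays in this account can be illustrated with a minimal scalar sketch (our own construction, far simpler than the paper's hierarchical simulations): fusing a Gaussian prior with Gaussian sensory evidence, where "attending" to a stimulus corresponds to raising its precision.

```python
def precision_weighted_estimate(mu_prior, pi_prior, y, pi_sensory):
    """Bayes-optimal fusion of a Gaussian prior with a Gaussian observation.
    Precisions (inverse variances) weight each source of evidence."""
    pi_post = pi_prior + pi_sensory
    mu_post = (pi_prior * mu_prior + pi_sensory * y) / pi_post
    return mu_post, pi_post

# Unattended stimulus: low sensory precision, the estimate stays near the prior.
mu_low, _ = precision_weighted_estimate(mu_prior=0.0, pi_prior=1.0, y=2.0, pi_sensory=0.25)
# Attended stimulus: high sensory precision pulls the estimate toward the data.
mu_high, _ = precision_weighted_estimate(mu_prior=0.0, pi_prior=1.0, y=2.0, pi_sensory=4.0)
```

The same data produce mu_low = 0.4 but mu_high = 1.6: boosting precision gates how strongly sensory evidence drives inference, which is the sense in which attention is "inferring precision" here.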
DEM: A variational treatment of dynamic systems, 2008
Abstract
This paper presents a variational treatment of dynamic models that furnishes time-dependent conditional densities on the path or trajectory of a system's states and the time-independent densities of its parameters. These are obtained by maximising a variational action with respect to conditional densities, under a fixed-form assumption. The action or path integral of free energy represents a lower bound on the model's log-evidence or marginal likelihood required for model selection and averaging. This approach rests on formulating the optimisation dynamically, in generalised coordinates of motion. The resulting scheme can be used for online Bayesian inversion of nonlinear dynamic causal models and is shown to outperform existing approaches, such as Kalman and particle filtering. Furthermore, it provides for dual and triple inferences on a system's states, parameters and hyperparameters using exactly the same principles. We refer to this approach as dynamic expectation maximisation (DEM).
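A small sketch of what "generalised coordinates of motion" buy you (illustrative only; DEM itself does far more than this): representing a state together with its temporal derivatives supports local extrapolation of the trajectory via a truncated Taylor series.

```python
import math

def predict_from_generalised_coords(gen_coords, dt):
    """Extrapolate a trajectory a short time ahead from its generalised
    coordinates of motion (value, velocity, acceleration, ...) using a
    truncated Taylor series."""
    return sum(c * dt ** k / math.factorial(k) for k, c in enumerate(gen_coords))

# Generalised coordinates of sin(t) at t = 0: value, velocity, acceleration, jerk.
coords = [0.0, 1.0, 0.0, -1.0]
prediction = predict_from_generalised_coords(coords, 0.1)  # close to sin(0.1)
```

Four coordinates already predict sin(0.1) to within about 1e-7, which hints at why encoding motion in generalised coordinates makes online inversion of dynamic models tractable.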
Generalised Filtering, 2010 (doi:10.1155/2010/621670)
Abstract
We describe a Bayesian filtering scheme for nonlinear state-space models in continuous time. This scheme is called Generalised Filtering and furnishes posterior (conditional) densities on hidden states and unknown parameters generating observed data. Crucially, the scheme operates online, assimilating data to optimize the conditional density on time-varying states and time-invariant parameters. In contrast to Kalman and particle smoothing, Generalised Filtering does not require a backwards pass. In contrast to variational schemes, it does not assume conditional independence between the states and parameters. Generalised Filtering optimises the conditional density with respect to a free-energy bound on the model's log-evidence. This optimisation uses the generalised motion of hidden states and parameters, under the prior assumption that the motion of the parameters is small. We describe the scheme, present comparative evaluations with a fixed-form variational version, and conclude with an illustrative application to a nonlinear state-space model of brain imaging time-series.
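The online character of the scheme can be caricatured in a few lines (a crude scalar gradient flow on prediction error, not the paper's free-energy functional; the names are ours): each observation is assimilated once, as it arrives, with no backwards pass.

```python
def online_filter(observations, learning_rate=0.2):
    """Assimilate observations one at a time, nudging a scalar state estimate
    down the gradient of the instantaneous squared prediction error.
    No backwards pass: each datum is used once, as it arrives."""
    mu = 0.0
    trajectory = []
    for y in observations:
        mu += learning_rate * (y - mu)  # -(d/dmu) of 0.5 * (y - mu)**2
        trajectory.append(mu)
    return trajectory

# The estimate converges toward a constant hidden state of 1.0.
estimates = online_filter([1.0] * 20)
```

The contrast with smoothing schemes is the point: nothing here revisits past data, so the filter can run indefinitely on a stream.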
Dynamical Causal Modelling, Hierarchical dynamical models
Abstract
This paper reviews a simple solution to the continuous-discrete Bayesian nonlinear state estimation problem that has been proposed recently. The key ideas are analytic noise processes, variational Bayes, and the formulation of the problem in terms of generalized coordinates of motion. Some of the algorithms, specifically dynamic expectation maximization and variational filtering, have been shown to outperform existing approaches like extended Kalman filtering and particle filtering. A pedagogical review of the theoretical formulation is presented, with an emphasis on concepts that are not as widely known in the filtering literature. We illustrate the application of these concepts using a numerical example.
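One of the key ideas listed above, analytic noise processes, can be illustrated with a short sketch (our own construction, not the review's formulation): convolving white noise with a Gaussian kernel yields the serially correlated, smooth fluctuations that generalised-coordinate schemes assume, in contrast to the uncorrelated noise of classical filtering.

```python
import math
import random

def smooth_noise(n, width, seed=0):
    """Analytic (serially correlated) noise: discrete white noise convolved
    with a truncated Gaussian kernel. Unlike white noise, successive samples
    change smoothly, so temporal derivatives of the noise are well defined."""
    rng = random.Random(seed)
    white = [rng.gauss(0.0, 1.0) for _ in range(n)]
    half = 3 * width
    kernel = [math.exp(-(i * i) / (2.0 * width * width)) for i in range(-half, half + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    # Convolve, clamping indices at the edges.
    return [sum(kernel[j] * white[min(max(i + j - half, 0), n - 1)]
                for j in range(len(kernel)))
            for i in range(n)]
```

Successive samples of the smoothed process differ far less than those of the underlying white noise, which is exactly the property that makes generalised coordinates of motion meaningful.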
2011
Abstract
In this paper, we pursue recent observations that, through selective dendritic filtering, single neurons respond to specific sequences of presynaptic inputs. We try to provide a principled and mechanistic account of this selectivity by applying a recent free-energy principle to a dendrite that is immersed in its neuropil or environment. We assume that neurons self-organize to minimize a variational free-energy bound on the self-information or surprise of presynaptic inputs that are sampled. We model this as a selective pruning of dendritic spines that are expressed on a dendritic branch. This pruning occurs when postsynaptic gain falls below a threshold. Crucially, postsynaptic gain is itself optimized with respect to free energy. Pruning suppresses free energy as the dendrite selects presynaptic signals that conform to its expectations, specified by a generative model implicit in its intracellular kinetics. Not only does this provide a principled account of how neurons organize and selectively sample the myriad of potential presynaptic inputs they are exposed to, but it also connects the optimization of elemental neuronal (dendritic) processing to generic (surprise or evidence-based) schemes in statistics and machine learning, such as Bayesian model selection and automatic relevance determination.
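The pruning rule described above can be caricatured in a few lines (a toy sketch with made-up numbers; "gain" here is just normalised agreement with the dendrite's expected pattern, not the paper's free-energy-optimised gain):

```python
import math

def postsynaptic_gain(presynaptic, expected):
    """Toy 'gain': cosine similarity between a presynaptic input pattern and
    the pattern the dendrite's generative model expects."""
    dot = sum(p * e for p, e in zip(presynaptic, expected))
    norm = (math.sqrt(sum(p * p for p in presynaptic))
            * math.sqrt(sum(e * e for e in expected)))
    return dot / norm if norm else 0.0

def prune(inputs, expected, threshold=0.5):
    """Retain only spines whose inputs conform to the dendrite's expectations."""
    return [name for name, pattern in inputs.items()
            if postsynaptic_gain(pattern, expected) >= threshold]

expected = [1.0, 0.0, 1.0]
inputs = {"spine_a": [0.9, 0.1, 1.1],  # conforms to expectations -> retained
          "spine_b": [0.0, 1.0, 0.0]}  # orthogonal input -> pruned
kept = prune(inputs, expected)
```

Spines carrying signals the model expects survive; the rest fall below threshold and are pruned, which is the qualitative behaviour the abstract describes.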
Series Expansion Approximations of Brownian Motion for Non-Linear Kalman Filtering of Diffusion Processes
Abstract
In this paper, we describe a novel application of sigma-point methods to continuous-discrete filtering. The nonlinear continuous-discrete filtering problem is often computationally intractable to solve. Assumed density filtering methods attempt to match statistics of the filtering distribution to some set of more tractable probability distributions. Filters such as these usually decompose the problem into two subproblems. The first of these is a prediction step, in which one uses the known dynamics of the signal to predict its state at time t + 1 given observations up to time t. In the second step, one updates the prediction upon arrival of the observation at time t + 1. The aim of this paper is to describe a novel method that improves the prediction step. We decompose the Brownian motion driving the signal in a generalised Fourier series, which is truncated after a number of terms. This approximation to Brownian motion can be described using a relatively small number of Fourier coefficients, and allows us to compute statistics of the filtering distribution with a single application of a sigma-point method. Assumed density filters that exist in the literature usually rely on discretisation of the signal dynamics followed by iterated application of a sigma-point transform (or a limiting case thereof). Iterating the transform in this manner can lead to loss of information about the filtering distribution in highly nonlinear settings. We demonstrate that our method is better equipped to cope with such problems.
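The series decomposition of Brownian motion described above can be sketched as follows (an illustrative Karhunen-Loeve sine expansion on [0, 1]; the paper's actual basis and notation may differ):

```python
import math

def kl_brownian_path(coeffs, t):
    """Karhunen-Loeve (generalised Fourier) approximation of Brownian motion
    on [0, 1]:
        W(t) ~ sqrt(2) * sum_k Z_k * sin((k + 1/2)*pi*t) / ((k + 1/2)*pi),
    where the Z_k are i.i.d. standard normal coefficients."""
    return math.sqrt(2.0) * sum(
        z * math.sin((k + 0.5) * math.pi * t) / ((k + 0.5) * math.pi)
        for k, z in enumerate(coeffs))

def truncated_variance_at_1(n_terms):
    """Variance of the truncated series at t = 1; the full series gives 1."""
    return sum(2.0 / ((k + 0.5) * math.pi) ** 2 for k in range(n_terms))
```

With only 10 terms the truncated series already captures roughly 98% of the variance at t = 1, which illustrates why a small number of Fourier coefficients can stand in for the driving noise in a single sigma-point pass.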