Results 1–10 of 18
Boredom: A Review
 Human Factors
, 1981
Abstract

Cited by 23 (0 self)
Edward Jenner, who discovered that it is possible to vaccinate against smallpox using material from cowpox, is rightly regarded as the founder of the science of immunology. Over the passage of time, however, many of the details surrounding his astounding discovery have been lost or forgotten. The environment within which Jenner worked as a country physician, and the state of medicine and society at the time, are also difficult to appreciate today: it is important to recall that patients were still being bled to relieve them of evil humours. Accordingly, this review details Jenner’s discovery and attempts to place it in historical context. The vaccine that Jenner used, which decreased the prevalence of smallpox worldwide in his own time and was later used to eradicate smallpox altogether, is also discussed in light of recent data.
Xiaoyong, A Review on Hybrid Storage, Microcomputer Applications, Vol. 29, No. 2
Abstract

Cited by 18 (7 self)
Epidemiology and prevention of hepatitis B virus infection in China
Free Energy, Value, and Attractors
, 2012
Abstract

Cited by 6 (2 self)
It has been suggested recently that action and perception can be understood as minimising the free energy of sensory samples. This ensures that agents sample the environment to maximise the evidence for their model of the world, such that exchanges with the environment are predictable and adaptive. However, the free energy account does not invoke reward or cost functions from reinforcement learning and optimal control theory. We therefore ask whether reward is necessary to explain adaptive behaviour. The free energy formulation uses ideas from statistical physics to explain action in terms of minimising sensory surprise. Conversely, reinforcement learning has its roots in behaviourism and engineering and assumes that agents optimise a policy to maximise future reward. This paper tries to connect the two formulations and concludes that optimal policies correspond to empirical priors on the trajectories of hidden environmental states, which compel agents to seek out the (valuable) states they expect to encounter.
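The contrast drawn in this abstract, acting to minimise expected surprise rather than to maximise an explicit reward, can be caricatured in a few lines. The actions, outcomes and probabilities below are invented purely for illustration; they are not taken from the paper.

```python
import numpy as np

# Toy action selection by minimising expected surprise -log p(o),
# with no reward function anywhere. All probabilities are made up.
p_obs_given_action = {
    "stay": {"warm": 0.9, "cold": 0.1},   # outcome distribution each action induces
    "move": {"warm": 0.4, "cold": 0.6},
}
# the agent's prior expectations over outcomes (its model of the world)
p_expected = {"warm": 0.9, "cold": 0.1}

def expected_surprise(action):
    """E_q[-log p(o)]: average surprise of outcomes under this action."""
    q = p_obs_given_action[action]
    return -sum(q[o] * np.log(p_expected[o]) for o in q)

best = min(p_obs_given_action, key=expected_surprise)
print(best)  # "stay"
```

Here "stay" wins precisely because its outcome statistics match the agent's priors, which is the sense in which, on the free energy account, expected states play the role of valuable states.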
Reviewed by:
, 2010
Abstract

Cited by 2 (0 self)
We suggested recently that attention can be understood as inferring the level of uncertainty or precision during hierarchical perception. In this paper, we try to substantiate this claim using neuronal simulations of directed spatial attention and biased competition. These simulations assume that neuronal activity encodes a probabilistic representation of the world that optimizes free energy in a Bayesian fashion. Because free energy bounds surprise or the (negative) log-evidence for internal models of the world, this optimization can be regarded as evidence accumulation or (generalized) predictive coding. Crucially, both predictions about the state of the world generating sensory data and the precision of those data have to be optimized. Here, we show that if the precision depends on the states, one can explain many aspects of attention. We illustrate this in the context of the Posner paradigm, using the simulations to generate both psychophysical and electrophysiological responses. These simulated responses are consistent with attentional bias or gating, competition for attentional resources, attentional capture and associated speed-accuracy trade-offs. Furthermore, if we present both attended and non-attended stimuli simultaneously, biased competition for neuronal representation emerges as a principled and straightforward property of Bayes-optimal perception.
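The core mechanism here, precision-weighted belief updating, can be sketched in a few lines. In this toy Gaussian update, "attending" a location is modelled simply as boosting the precision assigned to sensory samples from it, so the same prediction error produces a larger belief update. The numbers are illustrative and not fit to Posner-task data.

```python
def precision_weighted_update(mu_prior, pi_prior, x, pi_sens):
    """One step of Gaussian, predictive-coding-style belief updating.

    The posterior mean weights the sensory sample by its relative
    precision; attention is caricatured as an increase in pi_sens.
    """
    pi_post = pi_prior + pi_sens           # precisions add for Gaussians
    err = x - mu_prior                     # prediction error
    mu_post = mu_prior + (pi_sens / pi_post) * err
    return mu_post, pi_post

x = 1.0                                    # sensory sample
unattended = precision_weighted_update(0.0, 1.0, x, pi_sens=1.0)
attended = precision_weighted_update(0.0, 1.0, x, pi_sens=4.0)
print(unattended[0], attended[0])          # 0.5 vs 0.8
```

The same sample moves the attended belief further (0.8 vs 0.5), which is the sense in which precision acts as a gain or gating variable in these simulations.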
Series Expansion Approximations of Brownian Motion for Non-Linear Kalman Filtering of Diffusion Processes
Abstract

Cited by 1 (1 self)
In this paper, we describe a novel application of sigma-point methods to continuous-discrete filtering. The nonlinear continuous-discrete filtering problem is often computationally intractable to solve. Assumed density filtering methods attempt to match statistics of the filtering distribution to some set of more tractable probability distributions. Filters such as these usually decompose the problem into two subproblems. The first is a prediction step, in which one uses the known dynamics of the signal to predict its state at time t + 1 given observations up to time t. In the second step, one updates the prediction upon arrival of the observation at time t + 1. The aim of this paper is to describe a novel method that improves the prediction step. We decompose the Brownian motion driving the signal in a generalised Fourier series, which is truncated after a number of terms. This approximation to Brownian motion can be described using a relatively small number of Fourier coefficients, and allows us to compute statistics of the filtering distribution with a single application of a sigma-point method. Assumed density filters that exist in the literature usually rely on discretisation of the signal dynamics followed by iterated application of a sigma-point transform (or a limiting case thereof). Iterating the transform in this manner can lead to loss of information about the filtering distribution in highly nonlinear settings. We demonstrate that our method is better equipped to cope with such problems.
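The series-expansion idea is standard enough to sketch: Brownian motion on [0, 1] has the Karhunen-Loève expansion W(t) = Σ_k Z_k √2 sin((k − ½)πt) / ((k − ½)π) with i.i.d. standard-normal Z_k, and truncating the sum represents an approximate path by finitely many coefficients, which is what makes a single sigma-point pass possible. The snippet below is a generic illustration of the truncated expansion, not the authors' implementation; the truncation order is arbitrary.

```python
import numpy as np

def kl_brownian_path(z, t):
    """Truncated Karhunen-Loeve expansion of Brownian motion on [0, 1].

    z : (n_terms,) standard-normal coefficients
    t : (n_times,) evaluation times in [0, 1]
    """
    k = np.arange(1, len(z) + 1)
    freq = (k - 0.5) * np.pi                                  # eigenfrequencies
    basis = np.sqrt(2.0) * np.sin(np.outer(t, freq)) / freq   # (n_times, n_terms)
    return basis @ z

rng = np.random.default_rng(0)
n_terms = 64                      # truncation order (illustrative)
t = np.linspace(0.0, 1.0, 201)
w = kl_brownian_path(rng.standard_normal(n_terms), t)

# Sanity check: every path starts at W(0) = 0, and the truncation's
# variance at t = 0.5 should be close to Var[W(0.5)] = 0.5.
freq = (np.arange(1, n_terms + 1) - 0.5) * np.pi
var_half = np.sum(2.0 * np.sin(freq * 0.5) ** 2 / freq ** 2)
print(w[0], var_half)             # 0.0 and a value close to 0.5
```

Because the whole path is now a deterministic function of 64 Gaussian coefficients, a single sigma-point (unscented-style) transform over those coefficients can propagate statistics through the signal dynamics, rather than iterating a transform over many small time steps.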
Circular inferences in schizophrenia (A Journal of Neurology, Occasional Paper)
Abstract
A considerable number of recent experimental and computational studies suggest that subtle impairments of excitatory-to-inhibitory balance or regulation are involved in many neurological and psychiatric conditions. The current paper aims to relate, specifically and quantitatively, excitatory-to-inhibitory imbalance with psychotic symptoms in schizophrenia. Considering that the brain constructs hierarchical causal models of the external world, we show that the failure to maintain the excitatory-to-inhibitory balance results in hallucinations as well as in the formation and subsequent consolidation of delusional beliefs. Indeed, the consequence of excitatory-to-inhibitory imbalance in a hierarchical neural network is equated to a pathological form of causal inference called ‘circular belief propagation’. In circular belief propagation, bottom-up sensory information and top-down predictions are reverberated, i.e. prior beliefs are misinterpreted as sensory observations and vice versa. As a result, these predictions are counted multiple times. Circular inference explains the emergence of erroneous percepts, the patient’s overconfidence when facing probabilistic choices, the learning of ‘unshakable’ causal relationships between unrelated events and a paradoxical immunity to perceptual illusions, all of which are known to be associated with schizophrenia.
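The over-counting at the heart of circular inference is easy to show in log-odds form. In exact Bayesian inference, posterior log-odds are prior log-odds plus likelihood log-odds, each counted once; if reverberated messages re-enter the computation, the corresponding term is multiplied up and confidence inflates. The loop counts below are illustrative parameters, not values fitted in the paper.

```python
import numpy as np

def posterior_logodds(prior_lo, sens_lo, n_loops_up=0, n_loops_down=0):
    """Toy 'circular inference' update in log-odds form.

    With both loop counts at zero this is the exact Bayesian update;
    reverberation counts the bottom-up (sensory) or top-down (prior)
    message extra times.
    """
    return (1 + n_loops_down) * prior_lo + (1 + n_loops_up) * sens_lo

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

prior_lo = 0.0    # flat prior
sens_lo = 1.0     # weak sensory evidence: P ~ 0.73 if counted once
exact = sigmoid(posterior_logodds(prior_lo, sens_lo))
circular = sigmoid(posterior_logodds(prior_lo, sens_lo, n_loops_up=3))
print(exact, circular)   # over-counting turns weak evidence into near-certainty
```

The same weak evidence yields a moderate posterior (about 0.73) under exact inference but near-certainty (about 0.98) when the sensory message is counted four times, which is the toy analogue of the overconfidence and "unshakable" beliefs described above.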
DEM: A variational treatment of dynamic systems
, 2008
Abstract
This paper presents a variational treatment of dynamic models that furnishes time-dependent conditional densities on the path or trajectory of a system's states and the time-independent densities of its parameters. These are obtained by maximising a variational action with respect to the conditional densities, under a fixed-form assumption about their form. The action or path-integral of free energy represents a lower bound on the model's log-evidence or marginal likelihood required for model selection and averaging. This approach rests on formulating the optimisation dynamically, in generalised coordinates of motion. The resulting scheme can be used for online Bayesian inversion of nonlinear dynamic causal models and is shown to outperform existing approaches, such as Kalman and particle filtering. Furthermore, it provides for dual and triple inferences on a system's states, parameters and hyperparameters using exactly the same principles. We refer to this approach as dynamic expectation maximisation (DEM).
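The phrase "generalised coordinates of motion" has a concrete linear-algebra reading: the state is represented by the vector of its temporal derivatives x̃ = (x, x′, x″, …), and the operator mapping x̃ to its own motion is just a shift matrix D. The embedding order and example trajectory below are arbitrary choices for illustration.

```python
import numpy as np

# Generalised coordinates: x_tilde = (x, x', x'', x''').
# D shifts each component up one derivative, so D @ x_tilde is the
# motion of the generalised state itself.
n = 4                                     # embedding order (illustrative)
D = np.diag(np.ones(n - 1), k=1)          # superdiagonal shift matrix

# trajectory x(t) = t^3: derivatives at t = 1 are (1, 3, 6, 6)
x_tilde = np.array([1.0, 3.0, 6.0, 6.0])
print(D @ x_tilde)                        # [3. 6. 6. 0.]
```

Filtering schemes like DEM then perform gradient flows on a free-energy functional of x̃, using D to keep the representation consistent with the motion it encodes.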
Generalised Filtering (Research Article, doi:10.1155/2010/621670)
, 2010
Abstract
We describe a Bayesian filtering scheme for nonlinear state-space models in continuous time. This scheme is called Generalised Filtering and furnishes posterior (conditional) densities on hidden states and unknown parameters generating observed data. Crucially, the scheme operates online, assimilating data to optimize the conditional density on time-varying states and time-invariant parameters. In contrast to Kalman and particle smoothing, Generalised Filtering does not require a backwards pass. In contrast to variational schemes, it does not assume conditional independence between the states and parameters. Generalised Filtering optimises the conditional density with respect to a free-energy bound on the model’s log-evidence. This optimisation uses the generalised motion of hidden states and parameters, under the prior assumption that the motion of the parameters is small. We describe the scheme, present comparative evaluations with a fixed-form variational version, and conclude with an illustrative application to a nonlinear state-space model of brain imaging time-series.
Reviewed by:
, 2011
Abstract
In this paper, we pursue recent observations that, through selective dendritic filtering, single neurons respond to specific sequences of presynaptic inputs. We try to provide a principled and mechanistic account of this selectivity by applying a recent free-energy principle to a dendrite that is immersed in its neuropil or environment. We assume that neurons self-organize to minimize a variational free-energy bound on the self-information or surprise of presynaptic inputs that are sampled. We model this as a selective pruning of dendritic spines that are expressed on a dendritic branch. This pruning occurs when postsynaptic gain falls below a threshold. Crucially, postsynaptic gain is itself optimized with respect to free energy. Pruning suppresses free energy as the dendrite selects presynaptic signals that conform to its expectations, specified by a generative model implicit in its intracellular kinetics. Not only does this provide a principled account of how neurons organize and selectively sample the myriad of potential presynaptic inputs they are exposed to, but it also connects the optimization of elemental neuronal (dendritic) processing to generic (surprise- or evidence-based) schemes in statistics and machine learning, such as Bayesian model selection and automatic relevance determination.
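The pruning rule described above, remove a spine once its postsynaptic gain falls below a threshold, can be caricatured in a few lines. The gain dynamics here are a made-up relaxation toward a fixed "match" score, standing in for the paper's free-energy optimisation; the threshold, rate and scores are arbitrary.

```python
import numpy as np

n_spines = 16
gain = np.ones(n_spines)                    # postsynaptic gains, all start high
# how well each presynaptic input conforms to the dendrite's expectations;
# a fixed, invented score standing in for free-energy optimisation
match = np.linspace(-1.0, 1.0, n_spines)

threshold = 0.15                            # pruning threshold (arbitrary)
for _ in range(50):
    gain += 0.1 * (match - gain)            # gain relaxes toward the match score
    gain = np.clip(gain, 0.0, None)         # gains cannot go negative

pruned = gain < threshold                   # spines to remove
print(int(pruned.sum()), "of", n_spines, "spines pruned")
```

Spines whose inputs conform poorly to expectations see their gain decay below threshold and are removed, while well-matched spines retain high gain; this is the sense in which pruning implements a crude form of model selection over presynaptic inputs.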