Results 1–10 of 21
Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity
, 2013
Abstract

Cited by 12 (5 self)
The principles by which networks of neurons compute, and how spike-timing-dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, they suggest networks of Bayesian computation modules as a new model for distributed information processing in the cortex.
Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons
, 2011
Abstract

Cited by 8 (2 self)
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, which occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
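The "explaining away" effect mentioned in this abstract can be reproduced with plain Gibbs sampling, which is the abstract computation the paper maps onto stochastically firing neurons. The toy burglary/earthquake/alarm network below and all its probabilities are invented for illustration; each resampling step plays the role of one stochastic spike of the neuron coding that variable.

```python
import random

random.seed(1)
P_B = P_E = 0.1                      # priors on two rare causes

def p_alarm(b, e):
    """Noisy-OR likelihood: each active cause triggers the alarm with prob 0.8."""
    return 1.0 - (1.0 - 0.8 * b) * (1.0 - 0.8 * e)

def resample(prior, other):
    """Draw one cause from its full conditional, given the other cause and alarm = 1."""
    p1 = prior * p_alarm(1, other)
    p0 = (1.0 - prior) * p_alarm(0, other)
    return 1 if random.random() < p1 / (p0 + p1) else 0

b, e, samples = 1, 1, []
for _ in range(60000):
    b = resample(P_B, e)
    e = resample(P_E, b)
    samples.append((b, e))
samples = samples[5000:]             # discard burn-in

p_b = sum(b for b, _ in samples) / len(samples)
both = [b for b, e in samples if e == 1]
p_b_given_e = sum(both) / len(both)
print(p_b, p_b_given_e)              # roughly 0.53 vs 0.12: observing E=1 "explains away" B
```

The two estimates demonstrate the converging-arrows case: the burglary is likely given the alarm alone, but becomes unlikely once the earthquake is also observed.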
Homeostatic plasticity in Bayesian spiking networks as Expectation Maximization with posterior constraints
Abstract

Cited by 7 (2 self)
Recent spiking network models of Bayesian inference and unsupervised learning frequently assume either inputs to arrive in a special format or employ complex computations in neuronal activation functions and synaptic plasticity rules. Here we show in a rigorous mathematical treatment how homeostatic processes, which have previously received little attention in this context, can overcome common theoretical limitations and facilitate the neural implementation and performance of existing models. In particular, we show that homeostatic plasticity can be understood as the enforcement of a 'balancing' posterior constraint during probabilistic inference and learning with Expectation Maximization. We link homeostatic dynamics to the theory of variational inference, and show that nontrivial terms, which typically appear during probabilistic inference in a large class of models, drop out. We demonstrate the feasibility of our approach in a spiking Winner-Take-All architecture of Bayesian inference and learning. Finally, we sketch how the mathematical framework can be extended to richer recurrent network architectures. Altogether, our theory provides a novel perspective on the interplay of homeostatic processes and synaptic plasticity in cortical microcircuits, and points to an essential role of homeostasis during inference and learning in spiking networks.
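A minimal numerical sketch of the 'balancing' idea (our own toy construction, not the paper's derivation): a homeostatic bias update drives each unit's activation probability toward a uniform target, regardless of unequal feedforward drive, acting like dual ascent on the posterior constraint.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
target = 1.0 / n                      # balancing constraint: equal average activation
drive = rng.normal(0.0, 2.0, n)       # fixed, deliberately unequal feedforward input
b = np.zeros(n)                       # homeostatic intrinsic excitabilities

for _ in range(20000):
    u = drive + b
    p = np.exp(u - u.max())
    p /= p.sum()                      # WTA firing probabilities (softmax)
    b += 0.1 * (target - p)           # homeostatic update toward the target rate

print(np.round(p, 3))                 # ~[0.25 0.25 0.25 0.25] despite unequal drive
```

At the fixed point the biases exactly cancel the inequalities in the drive, which is the sense in which homeostasis enforces the constraint rather than competing with inference.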
Spatiotemporal Spike Pattern Classification in Neuromorphic Systems
Abstract

Cited by 1 (1 self)
Spike-based neuromorphic electronic architectures offer an attractive solution for implementing compact, efficient sensory-motor neural processing systems for robotic applications. Such systems typically comprise event-based sensors and multi-neuron chips that encode, transmit, and process signals using spikes. For robotic applications, the ability to sustain real-time interactions with the environment is an essential requirement. These neuromorphic systems therefore need to process sensory signals continuously and instantaneously, as the input data arrives, classify the spatiotemporal information contained in the data, and produce appropriate motor outputs in real time. In this paper we evaluate the computational approaches that have been proposed for classifying spatiotemporal sequences of spike trains, derive the main principles and the key components that are required to build a neuromorphic system that works in robotic application scenarios, within the constraints imposed by a biologically realistic hardware implementation, and present possible system-level solutions.
Efficient and Scalable Biologically Plausible Spiking Neural Networks with Learning Applied to Vision
, 2010
An Online Algorithm for Learning Selectivity to Mixture Means
, 2014
Abstract

Cited by 1 (1 self)
We develop a biologically plausible learning rule called Triplet BCM that provably converges to the class means of general mixture models. This rule generalizes the classical BCM neural rule, and provides a novel interpretation of classical BCM as performing a kind of tensor decomposition. It achieves a substantial generalization over classical BCM by incorporating triplets of samples from the mixtures, which provides a novel information processing interpretation of spike-timing-dependent plasticity. We provide complete proofs of convergence of this learning rule, and an extended discussion of the connection between BCM and tensor learning. Spectral tensor methods are emerging themes in machine learning, but they remain global rather than "online." While incremental (online) learning can be useful in many practical applications, it is essential for biological learning.
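The classical BCM rule that this paper generalizes can be sketched in a few lines (all parameters and the two-component mixture are our own toy choices): with a sliding modification threshold tracking E[y²], a single unit becomes selective for one mixture component, which is the "selectivity to mixture means" behavior the Triplet BCM work builds on.

```python
import numpy as np

rng = np.random.default_rng(3)
patterns = np.array([[2.0, 0.1],       # two mixture "means", presented with equal probability
                     [0.1, 2.0]])
w = rng.normal(0.0, 0.1, 2)
theta = 1.0                            # sliding modification threshold
eta, tau = 0.002, 0.01

for _ in range(30000):
    x = patterns[rng.integers(2)]
    y = float(w @ x)                   # postsynaptic activity
    w += eta * y * (y - theta) * x     # BCM: LTP when y > theta, LTD when 0 < y < theta
    theta += tau * (y * y - theta)     # threshold slides toward E[y^2]

resp = sorted(float(w @ p) for p in patterns)
print(resp)                            # one response large, the other near zero: selectivity
```

The non-selective state (equal response to both patterns) is an unstable fixed point of these dynamics, so the random initialization is enough to break the symmetry.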
Integration of nanoscale memristor synapses in neuromorphic computing architectures
, 2013
Emergence of Optimal Decoding of Population Codes Through STDP
Abstract
The brain faces the problem of inferring reliable hidden causes from large populations of noisy neurons, for example, the direction of a moving object from spikes in area MT. It is known that a theoretically optimal likelihood decoding could be carried out by simple linear readout neurons if the weights of synaptic connections were set to certain values that depend on the tuning functions of sensory neurons. We show here that such theoretically optimal readout weights emerge autonomously through STDP in conjunction with lateral inhibition between readout neurons. In particular, we identify a class of optimal STDP learning rules with homeostatic plasticity, for which the autonomous emergence of optimal readouts can be explained on the basis of a rigorous learning theory. This theory shows that the network motif we consider approximates expectation-maximization for creating internal generative models for hidden causes of high-dimensional spike inputs. Notably, we find that this optimal functionality can be well approximated by a variety of STDP rules beyond those predicted by theory. Furthermore, we show that this learning process is very stable and automatically adjusts weights to changes in the number of readout neurons, the tuning functions of sensory neurons, and the statistics of external stimuli.
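The "theoretically optimal likelihood decoding" referred to above has a simple closed form for independent Poisson spike counts: a linear readout whose weights are the log tuning rates. The sketch below (tuning curves and all constants invented) only verifies that this readout recovers a stimulus direction; the STDP learning of these weights is what the paper adds and is not simulated here.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sensory = 32
directions = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)    # decoder's hypothesis grid
prefs = np.linspace(0.0, 2 * np.pi, n_sensory, endpoint=False)  # preferred directions

def rates(theta):
    """Von-Mises-like tuning curves (Hz), baseline 5, peak 25."""
    return 5.0 + 20.0 * np.exp(2.0 * (np.cos(theta - prefs) - 1.0))

# Optimal readout weights for Poisson noise: w = log(rate). Because the summed
# rate of this homogeneous population is essentially the same for every
# direction, the argmax of the linear drive W @ counts is the ML estimate.
W = np.log(np.stack([rates(d) for d in directions]))            # (64, 32)

errs = []
for _ in range(300):
    true = rng.choice(directions)
    counts = rng.poisson(rates(true) * 0.2)                     # spike counts in a 200 ms window
    est = directions[int(np.argmax(W @ counts))]
    errs.append(abs((est - true + np.pi) % (2 * np.pi) - np.pi))

print(float(np.mean(errs)))            # mean circular error, typically well under 0.5 rad
```

With lateral inhibition picking the most driven readout neuron, this argmax is exactly what a WTA readout layer computes, which is why the log-tuning weights are the learning target.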
Department of Electrical and Computer Engineering
Abstract
Submitted to the Faculté des études supérieures et postdoctorales of Université Laval, in the doctoral program in electrical engineering, for the degree of Philosophiae Doctor (Ph.D.)