Results 1–10 of 287
Generalized Integrate-and-Fire Models of Neuronal Activity Approximate Spike Trains of a . . .
Abstract

Cited by 58 (14 self)
We demonstrate that single-variable integrate-and-fire models can quantitatively capture the dynamics of a physiologically detailed model for fast-spiking cortical neurons. Through a systematic set of approximations, we reduce the conductance-based model to two variants of integrate-and-fire models. In the first variant (nonlinear integrate-and-fire model), parameters depend on the instantaneous membrane potential, whereas in the second variant, they depend on the time elapsed since the last spike (Spike Response Model). The direct reduction links features of the simple models to biophysical features of the full conductance-based model. To quantitatively ...
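The reduction described above starts from a standard leaky integrate-and-fire neuron. As a minimal sketch (the membrane parameters below are illustrative textbook values, not the ones fitted in the paper):

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau=10.0, v_rest=-65.0, v_reset=-65.0,
                 v_thresh=-50.0, r_m=10.0):
    """Euler integration of a leaky integrate-and-fire neuron.

    I: input current per time step (nA). Returns (voltage trace in mV,
    spike times in ms). Parameter values are illustrative only."""
    v = np.full(len(I), v_rest)
    spikes = []
    for t in range(1, len(I)):
        dv = (-(v[t - 1] - v_rest) + r_m * I[t - 1]) / tau
        v[t] = v[t - 1] + dt * dv
        if v[t] >= v_thresh:          # threshold crossing: emit a spike
            spikes.append(t * dt)
            v[t] = v_reset            # and reset the membrane potential
    return v, spikes

# a constant suprathreshold current produces regular spiking
v, spikes = simulate_lif(np.full(1000, 2.0))
```

The paper's two variants make the effective parameters of such a model depend either on the instantaneous voltage or on the time elapsed since the last spike.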
Sparse Coding In The Primate Cortex
, 2002
Abstract

Cited by 52 (4 self)
INTRODUCTION Brain function can be seen as computation, i.e. the manipulation of information necessary for survival. Computation itself is an abstract process but it must be performed or implemented in a physical system. Any physical computing system, be it an electronic computer or a biological system consisting of neurons, must use some form of physical representation for the pieces of information that it processes. Computations are implemented by the transformations of these physical representations of information. The brain receives information via the sensory channels and must eventually generate an appropriate motor output. But before we can even study the transformations that are involved, we need at least some fundamental understanding of the internal representation that these transformations operate on. Neurons represent and communicate information mainly by generating (or 'firing') a sequence of electrical impulses. Electrophysiological techniques exist for the recor ...
Synergies Between Intrinsic and Synaptic Plasticity Mechanisms
, 2007
Abstract

Cited by 41 (6 self)
We propose a model of intrinsic plasticity for a continuous activation model neuron based on information theory. We then show how intrinsic and synaptic plasticity mechanisms interact and allow the neuron to discover heavy-tailed directions in the input. We also demonstrate that intrinsic plasticity may be an alternative explanation for the sliding threshold postulated in the BCM theory of synaptic plasticity. We present a theoretical analysis of the interaction of intrinsic plasticity with different Hebbian learning rules for the case of clustered inputs. Finally, we perform experiments on the “bars” problem, a popular nonlinear independent component analysis problem.
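A well-known intrinsic-plasticity rule of this kind (Triesch's gradient rule for a sigmoid neuron; the learning rate, target mean, and Gaussian input below are illustrative assumptions, not necessarily the paper's setup) adapts the neuron's gain and bias so its output distribution approaches an exponential with a fixed mean:

```python
import numpy as np

def intrinsic_plasticity(x_samples, mu=0.2, eta=0.01):
    """Adapt gain a and bias b of y = 1 / (1 + exp(-(a*x + b))) so the
    output distribution approaches an exponential with mean mu
    (maximizing output entropy at a fixed mean firing rate)."""
    a, b = 1.0, 0.0
    for x in x_samples:
        y = 1.0 / (1.0 + np.exp(-(a * x + b)))
        # stochastic-gradient updates toward the exponential target
        db = eta * (1.0 - (2.0 + 1.0 / mu) * y + y * y / mu)
        da = eta * (1.0 / a + x - (2.0 + 1.0 / mu) * x * y + x * y * y / mu)
        a += da
        b += db
    return a, b

rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
a, b = intrinsic_plasticity(x)
y = 1.0 / (1.0 + np.exp(-(a * x + b)))  # mean output should settle near mu
```

The interesting interactions studied in the paper arise when such a rule runs concurrently with a Hebbian update on the input weights.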
The jackknife: a review
 Biometrika
, 1974
Abstract

Cited by 39 (0 self)
Interleukin (IL)-33 is a new member of the IL-1 superfamily of cytokines that is expressed mainly by stromal cells, such as epithelial and endothelial cells, and its expression is upregulated following pro-inflammatory stimulation. IL-33 can function both as a traditional cytokine and as a nuclear factor regulating gene transcription. It is thought to function as an 'alarmin' released following cell necrosis to alert the immune system to tissue damage or stress. It mediates its biological effects via interaction with the receptors ST2 (IL-1RL1) and IL-1 receptor accessory protein (IL-1RAcP), both of which are widely expressed, particularly by innate immune cells and T helper 2 (Th2) cells. IL-33 strongly induces Th2 cytokine production from these cells and can promote the pathogenesis of Th2-related disease such as asthma, atopic dermatitis and anaphylaxis. However, IL-33 has shown various protective effects in cardiovascular diseases such as atherosclerosis, obesity, type 2 diabetes and cardiac remodeling. Thus, the effects of IL-33 are either pro- or anti-inflammatory depending on the disease and the model. In this review the role of IL-33 in the inflammation of several disease pathologies will be discussed, with particular emphasis on recent advances.
The dynamics of legged locomotion: Models, analyses, and challenges
 SIAM Review
, 2006
Abstract

Cited by 38 (4 self)
Cheetahs and beetles run, dolphins and salmon swim, and bees and birds fly with grace and economy surpassing our technology. Evolution has shaped the breathtaking abilities of animals, leaving us the challenge of reconstructing their targets of control and mechanisms of dexterity. In this review we explore a corner of this fascinating world. We describe mathematical models for legged animal locomotion, focusing on rapidly running insects, and highlighting achievements and challenges that remain. Newtonian body-limb dynamics are most naturally formulated as piecewise-holonomic rigid body mechanical systems, whose constraints change as legs touch down or lift off. Central pattern generators and proprioceptive sensing require models of spiking neurons, and simplified phase oscillator descriptions of ensembles of them. A full neuromechanical model of a running animal requires integration of these elements, along with proprioceptive feedback and models of goal-oriented sensing, planning and learning. We outline relevant background material from neurobiology and biomechanics, explain key properties of the hybrid dynamical systems that underlie legged locomotion models, and provide numerous examples of such models, from the simplest, completely soluble 'peg-leg walker' to complex neuromuscular subsystems that are yet to be assembled into models of behaving animals.
Advancing the Boundaries of High-Connectivity Network Simulation with Distributed Computing
, 2005
Abstract

Cited by 32 (12 self)
The availability of efficient and reliable simulation tools is one of the mission-critical technologies in the fast-moving field of computational neuroscience. Research indicates that higher brain functions emerge from large and complex cortical networks and their interactions. The large number of elements (neurons) combined with the high connectivity (synapses) of the biological network and the specific type of interactions impose severe constraints on the explorable system size that previously have been hard to overcome. Here we present a collection of new techniques combined into a coherent simulation tool removing the fundamental obstacle in the computational study of biological neural networks: the enormous number of synaptic contacts per neuron. Distributing an individual simulation over multiple computers enables the investigation of networks orders of magnitude larger than previously possible. The ...
Common-input models for multiple neural spike-train data
 Network: Comput. Neural Syst.
, 2006
Abstract

Cited by 30 (17 self)
Recent developments in multi-electrode recordings enable the simultaneous measurement of the spiking activity of many neurons. Analysis of such multi-neuronal data is one of the key challenges in computational neuroscience today. In this work, we develop a multivariate point-process model in which the observed activity of a network of neurons depends on three terms: 1) the experimentally controlled stimulus; 2) the spiking history of the observed neurons; and 3) a latent noise source that corresponds, for example, to “common input” from an unobserved population of neurons that is presynaptic to two or more cells in the observed population. We develop an expectation-maximization algorithm for fitting the model parameters; here the expectation step is based on a continuous-time implementation of the extended Kalman smoother, and the maximization step involves two concave maximization problems which may be solved in parallel. The techniques developed allow us to solve a variety of inference problems in a straightforward, computationally efficient fashion; for example, we may use the model to predict network activity given an arbitrary stimulus, infer a neuron's firing rate given the stimulus and the activity of the other observed neurons, and perform optimal stimulus decoding and prediction. We present several detailed simulation studies which explore the strengths and limitations of our approach.
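A minimal generative sketch of such a common-input model is below: two neurons spike as Bernoulli point processes whose log intensity sums a baseline stimulus term, a spike-history (refractory) term, and a shared latent AR(1) input. The rates, coupling constants, and latent dynamics are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def simulate_common_input(n_steps=5000, dt=0.001, seed=1):
    """Simulate two neurons driven by a shared latent 'common input'.

    Each neuron spikes with probability rate*dt per bin, where
    log rate = baseline + latent + spike-history term."""
    rng = np.random.default_rng(seed)
    q = np.zeros(n_steps)                 # latent common input, AR(1)
    spikes = np.zeros((2, n_steps))
    base = np.log(20.0)                   # ~20 Hz baseline rate
    for t in range(1, n_steps):
        q[t] = 0.99 * q[t - 1] + 0.1 * rng.standard_normal()
        for i in range(2):
            # strong negative history term right after a spike
            history = -5.0 if spikes[i, t - 1] else 0.0
            rate = np.exp(base + q[t] + history)
            spikes[i, t] = float(rng.random() < rate * dt)
    return spikes, q

spikes, q = simulate_common_input()
```

Because both neurons load on the same latent `q`, their spike trains are correlated beyond what stimulus and history terms explain; the paper's EM algorithm infers exactly such a latent source from observed trains.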
Sparse coding via thresholding and local competition in neural circuits
Abstract

Cited by 30 (6 self)
While evidence indicates that neural systems may be employing sparse approximations to represent sensed stimuli, the mechanisms underlying this ability are not understood. We describe a locally competitive algorithm (LCA) that solves a collection of sparse coding principles minimizing a weighted combination of mean-squared error (MSE) and a coefficient cost function. LCAs are designed to be implemented in a dynamical system composed of many neuron-like elements operating in parallel. These algorithms use thresholding functions to induce local (usually one-way) inhibitory competitions between nodes to produce sparse representations. LCAs produce coefficients with sparsity levels comparable to the most popular centralized sparse coding algorithms while being readily suited for neural implementation. Additionally, LCA coefficients for video sequences demonstrate inertial properties that are both qualitatively and quantitatively more regular (i.e., smoother and more predictable) than the coefficients produced by greedy algorithms.
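The LCA dynamics can be sketched as follows, assuming the soft-threshold variant that corresponds to an l1 sparsity cost (dictionary size, step size, and threshold below are illustrative choices):

```python
import numpy as np

def lca(phi, s, lam=0.1, tau=10.0, dt=1.0, n_steps=200):
    """Locally competitive algorithm: internal states u are driven by the
    stimulus projection while the thresholded coefficients a inhibit
    competing nodes through the dictionary's Gram matrix."""
    n = phi.shape[1]
    drive = phi.T @ s                     # feedforward input b = phi^T s
    gram = phi.T @ phi - np.eye(n)        # lateral inhibition weights
    u = np.zeros(n)
    soft = lambda u: np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
    for _ in range(n_steps):
        a = soft(u)                       # soft-threshold activation
        u += (dt / tau) * (drive - u - gram @ a)
    return soft(u)

rng = np.random.default_rng(0)
phi = rng.standard_normal((64, 128))
phi /= np.linalg.norm(phi, axis=0)        # unit-norm dictionary atoms
s = phi[:, :3] @ np.array([1.0, -0.5, 0.8])  # signal built from 3 atoms
a = lca(phi, s)                           # sparse coefficient vector
```

The thresholding function is what makes the competition "usually one-way": a node influences its neighbors only once its own state exceeds the threshold.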
Reinforcement Learning by Policy Search
, 2000
Abstract

Cited by 28 (2 self)
One objective of artificial intelligence is to model the behavior of an intelligent agent interacting with its environment. The environment's transformations could be modeled as a Markov chain, whose state is partially observable to the agent and affected by its actions; such processes are known as partially observable Markov decision processes (POMDPs). While the environment's dynamics are assumed to obey certain rules, the agent does not know them and must learn. In this dissertation we focus on the agent's adaptation as captured by the reinforcement learning framework. Reinforcement learning means learning a policy (a mapping of observations into actions) based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. The set of policies being searched is constrained by the architecture of the agent's controller. POMDPs require a controller to have a memory. We investigate various architectures for controllers with memory, including controllers with external memory, finite state controllers and distributed controllers for multi-agent systems. For these various controllers we work out the details of the algorithms which learn by ascending the gradient of expected cumulative reinforcement. Building on statistical learning theory and experiment design theory, a policy evaluation algorithm is developed for the case of experience reuse. We address the question of sufficient experience for uniform convergence of policy evaluation and obtain sample complexity bounds for various estimators. Finally, we demonstrate the performance of the proposed algorithms on several domains, the most complex of which is simulated adaptive packet routing in a telecommunication network.
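The gradient-ascent idea can be illustrated in the simplest memoryless case: a two-armed bandit with a softmax policy, updated by REINFORCE without a baseline (the reward probabilities and learning rate below are illustrative, not from the dissertation):

```python
import numpy as np

def reinforce_bandit(p_reward=(0.2, 0.8), lr=0.1, n_episodes=2000, seed=0):
    """REINFORCE on a two-armed Bernoulli bandit with a softmax policy.

    Ascends the gradient of expected reward using the score function
    d log pi(a)/d theta_k = 1{a == k} - pi_k."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)                   # one preference per arm
    for _ in range(n_episodes):
        probs = np.exp(theta) / np.exp(theta).sum()
        a = rng.choice(2, p=probs)
        r = float(rng.random() < p_reward[a])  # Bernoulli reward
        grad = -probs
        grad[a] += 1.0                    # score function for the chosen arm
        theta += lr * r * grad            # policy-gradient update
    return np.exp(theta) / np.exp(theta).sum()

probs = reinforce_bandit()                # policy concentrates on arm 1
```

The controllers studied in the dissertation apply the same gradient principle to parameterized policies with memory, where the "arm" becomes an action conditioned on the controller's internal state.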