Results 1–10 of 30
A neurobiological theory of meaning in perception. Part 1. Information and meaning in nonconvergent and nonlocal brain dynamics
Int. J. Bifurc. Chaos, 2003
Abstract

Cited by 28 (14 self)
Information transfer and integration, among functionally distinct areas of cerebral cortex, of oscillatory activity requires some degree of phase synchrony of the trains of action potentials that carry the information prior to the integration. However, propagation delays are obligatory. Delays vary with the lengths and conduction velocities of the axons carrying the information, causing phase dispersion. In order to determine how synchrony is achieved despite dispersion, we recorded EEG signals from multiple electrode arrays on five cortical areas in cats and rabbits that had been trained to discriminate visual or auditory conditioned stimuli. Analysis by time-lagged correlation, multiple correlation, and PCA showed that maximal correlation was at zero lag and averaged 0.7, indicating that 50% of the power in the gamma range among the five areas was at zero lag irrespective of phase or frequency. There were no stimulus-related episodes of transiently increased phase locking among the areas, nor EEG "bursts" of transiently increased amplitude above the sustained level of synchrony. Three operations were identified to account for the sustained correlation. Cortices broadcast their outputs over divergent-convergent axonal ...
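As a toy illustration of the time-lagged correlation analysis used here (synthetic signals standing in for EEG channels; the 40 Hz component and noise level are made up, not the paper's data), two channels sharing a common gamma-band component are maximally correlated at zero lag:

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Pearson correlation between x and y at each integer lag."""
    lags = np.arange(-max_lag, max_lag + 1)
    r = []
    for lag in lags:
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        r.append(np.corrcoef(a, b)[0, 1])
    return lags, np.array(r)

# Two noisy channels sharing a common 40 Hz ("gamma") component.
rng = np.random.default_rng(0)
t = np.arange(0.0, 5.0, 1e-3)                  # 5 s at 1 kHz
common = np.sin(2 * np.pi * 40 * t)
x = common + 0.3 * rng.standard_normal(t.size)
y = common + 0.3 * rng.standard_normal(t.size)

lags, r = lagged_correlation(x, y, max_lag=20)
best_lag = int(lags[np.argmax(r)])             # maximal correlation at lag 0
```

Here zero-lag dominance is built in by construction (the shared component has no delay); the paper's point is that the same signature appears between cortical areas despite obligatory axonal delays.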
Patterns of synchrony in neural networks with spike adaptation
Neural Comp., 2001
Abstract

Cited by 19 (1 self)
We study the emergence of synchronized burst activity in networks of neurons with spike adaptation. We show that networks of tonically firing adapting excitatory neurons can evolve to a state where the neurons burst in a synchronized manner. The mechanism leading to this burst activity is analyzed in a network of integrate-and-fire neurons with spike adaptation. The dependence of this state on the different network parameters is investigated, and it is shown that this mechanism is robust against inhomogeneities, sparseness of the connectivity, and noise. In networks of two populations, one excitatory and one inhibitory, we show that decreasing the inhibitory feedback can cause the network to switch from a tonically active, asynchronous state to the synchronized bursting state. Finally, we show that the same mechanism also causes synchronized burst activity in networks of more realistic conductance-based model neurons.
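A minimal single-neuron sketch of the adaptation ingredient (illustrative parameters, not the paper's network model): a tonically driven integrate-and-fire neuron with a spike-triggered adaptation current fires with progressively longer interspike intervals as the adaptation variable builds up.

```python
import numpy as np

def adapting_lif(I=2.0, da=0.3, tau=20.0, tau_a=200.0, dt=0.1, t_max=1000.0):
    """Integrate-and-fire neuron (threshold 1, reset 0) with adaptation:
       tau   dv/dt = -v + I - a
       tau_a da/dt = -a,  with a -> a + da at each spike."""
    v, a, spikes = 0.0, 0.0, []
    for step in range(int(t_max / dt)):
        v += dt / tau * (-v + I - a)
        a += dt / tau_a * (-a)
        if v >= 1.0:
            v = 0.0
            a += da
            spikes.append(step * dt)
    return np.array(spikes)

spikes = adapting_lif()
isis = np.diff(spikes)   # interspike intervals lengthen as adaptation builds
```

In the network setting of the paper, this slow activity-dependent negative feedback, combined with recurrent excitation, is what alternates the population between active and silent phases.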
The Autapse: A Simple Illustration of Short-Term Analog Memory Storage By Tuned Synaptic Feedback
2000
Abstract

Cited by 16 (2 self)
According to a popular hypothesis, short-term memories are stored as persistent neural activity maintained by synaptic feedback loops. This hypothesis has been formulated mathematically in a number of recurrent network models. Here we study an abstraction of these models, a single neuron with a synapse onto itself, or autapse. This abstraction cannot simulate the way in which persistent activity patterns are distributed over neural populations in the brain. However, with proper tuning of parameters, it does reproduce the continuously graded, or analog, nature of many examples of persistent activity. The conditions for tuning are derived for the dynamics of a conductance-based model neuron with a slow excitatory autapse. The derivation uses the method of averaging to approximate the spiking model with a non-spiking, reduced model. Short-term analog memory storage is possible if the reduced model is approximately linear, and its feedforward bias and autapse strength are precisely...
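The tuning requirement is easiest to see in a linear caricature of the reduced, non-spiking model (a sketch, not the paper's conductance-based derivation): when the autapse weight exactly cancels the leak, every activity level is a fixed point, so the neuron holds an arbitrary analog value.

```python
import numpy as np

def autapse_rate_model(r0, b=0.0, w=1.0, tau=10.0, dt=0.1, t_max=500.0):
    """Reduced (non-spiking) autapse model: tau dr/dt = -r + w*r + b.
       With the tuning w = 1 and b = 0, dr/dt = 0 for every r: a line
       attractor that holds any initial rate r0 as a stored analog value."""
    r = r0
    for _ in range(int(t_max / dt)):
        r += dt / tau * (-r + w * r + b)
    return r

# Tuned feedback (w = 1): the stored rate persists indefinitely.
held = autapse_rate_model(0.37)
# Detuned feedback (w = 0.9): the memory leaks away toward b/(1-w) = 0.
leaky = autapse_rate_model(0.37, w=0.9)
```

Detuning by 10% turns the line attractor into a single fixed point, and the stored value decays with effective time constant tau/(1 - w); this is the fragility that makes the precise tuning condition in the abstract necessary.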
Existence and Stability of Standing Pulses in Neural Networks: I. Existence
SIAM Journal on Applied Dynamical Systems, 2003
Abstract

Cited by 15 (1 self)
We analyze the stability of standing pulse solutions of a neural network integro-differential equation. The network consists of a coarse-grained layer of neurons synaptically connected by lateral inhibition with a nonsaturating nonlinear gain function. When two standing single-pulse solutions coexist, the small pulse is unstable, and the large pulse is stable. The large single pulse is bistable with the “all-off” state. This bistable localized activity may have strong implications for the mechanism underlying working memory. We show that dimple pulses have similar stability properties to large pulses, but double pulses are unstable.
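The bistability between a standing pulse and the “all-off” state can be reproduced numerically in a close cousin of this model, the classical Amari field with a Heaviside gain (the paper's analysis uses a nonsaturating gain instead; the kernel and threshold values below are made up for illustration):

```python
import numpy as np

# Spatial grid and Mexican-hat (local excitation, lateral inhibition) kernel.
x = np.linspace(-10.0, 10.0, 201)
dx = x[1] - x[0]
w = 3.0 * np.exp(-x**2) - 1.4 * np.exp(-x**2 / 9.0)
h = 0.5  # firing threshold

def evolve(u, t_max=30.0, dt=0.05):
    """Amari neural field: du/dt = -u + (w * H(u)) - h, Heaviside gain H."""
    for _ in range(int(t_max / dt)):
        drive = np.convolve((u > 0).astype(float), w, mode="same") * dx
        u = u + dt * (-u + drive - h)
    return u

# A localized kick relaxes to a self-sustained standing pulse ...
u_bump = evolve(np.where(np.abs(x) < 1.5, 0.5, -h))
bump_width = np.sum(u_bump > 0) * dx
# ... while the resting state stays off: the two coexist (bistability).
u_off = evolve(np.full_like(x, -h))
```

The pulse survives after the transient input is gone, while an unperturbed field stays at rest, which is the bistable localized activity the abstract links to working memory.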
Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks
, 2003
Abstract

Cited by 15 (3 self)
The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is non-connected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
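A two-unit numerical illustration of the dichotomy (toy matrices, not from the paper; the conditions are stated here for I - W under the dynamics dx/dt = -x + [Wx + b]_+): when I - W is positive definite the attractive fixed-point set is a single point, while strong mutual inhibition makes I - W copositive but not positive semidefinite, yielding two separate attractors whose permitted sets are {1} and {2}.

```python
import numpy as np

def simulate_tln(W, b, x0, dt=0.01, t_max=100.0):
    """Symmetric threshold-linear network: dx/dt = -x + [W x + b]_+."""
    x = np.asarray(x0, dtype=float)
    for _ in range(int(t_max / dt)):
        x += dt * (-x + np.maximum(W @ x + b, 0.0))
    return x

b = np.array([1.0, 1.0])

# Weak mutual inhibition: I - W is positive definite, so the attractive
# fixed-point set is connected (a single point, reached from anywhere).
W_weak = np.array([[0.2, -0.5],
                   [-0.5, 0.2]])
x_a = simulate_tln(W_weak, b, [5.0, 0.0])
x_b = simulate_tln(W_weak, b, [0.0, 5.0])

# Strong mutual inhibition: I - W is copositive but not positive
# semidefinite -- multistable, with permitted sets {1} and {2}.
W_strong = np.array([[0.0, -2.0],
                     [-2.0, 0.0]])
x_c = simulate_tln(W_strong, b, [5.0, 0.0])   # winner: unit 1
x_d = simulate_tln(W_strong, b, [0.0, 5.0])   # winner: unit 2
```

The coactive pair {1, 2} is permitted in the weak-inhibition network but forbidden in the strong-inhibition one, which is the winner-take-all behaviour the permitted/forbidden-set language formalizes.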
Dynamics of Strongly Coupled Spiking Neurons
Neural Computation, 2000
Abstract

Cited by 14 (2 self)
We present a dynamical theory of integrate-and-fire neurons with strong synaptic coupling. We show how phase-locked states that are stable in the weak-coupling regime can destabilize as the coupling is increased, leading to states characterized by spatiotemporal variations in the interspike intervals (ISIs). The dynamics is compared with that of a corresponding network of analog neurons in which the outputs of the neurons are taken to be mean firing rates. A fundamental result is that for slow interactions, there is good agreement between the two models (on an appropriately defined timescale). Various examples of desynchronization in the strong-coupling regime are presented. First, a globally coupled network of identical neurons with strong inhibitory coupling is shown to exhibit oscillator death, in which some of the neurons suppress the activity of others. However, the stability of the synchronous state persists for very large networks and fast synapses. Second, an asymmetric network with a mixture of excitation and inhibition is shown to exhibit periodic bursting patterns. Finally, a one-dimensional network of neurons with long-range interactions is shown to desynchronize to a state with a spatially periodic pattern of mean firing rates across the network. This is modulated by deterministic fluctuations of the instantaneous firing rate whose size is an increasing function of the speed of synaptic response.
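The correspondence between a spiking neuron and its mean-firing-rate ("analog") description can be checked directly for a single LIF neuron with constant drive (a textbook single-neuron reduction used here as a sketch; the paper's analysis concerns coupled networks):

```python
import numpy as np

def lif_rate_simulated(I, tau=20.0, dt=0.01, t_max=2000.0):
    """Firing rate (spikes/ms) of an LIF neuron, tau dv/dt = -v + I,
       with threshold 1 and reset 0, measured by direct simulation."""
    v, n_spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt / tau * (-v + I)
        if v >= 1.0:
            v = 0.0
            n_spikes += 1
    return n_spikes / t_max

def lif_rate_analytic(I, tau=20.0):
    """The corresponding 'analog' output: the exact mean firing rate."""
    return 1.0 / (tau * np.log(I / (I - 1.0)))

rate_sim = lif_rate_simulated(1.5)
rate_th = lif_rate_analytic(1.5)
```

For constant (i.e. infinitely slow) input the two descriptions agree closely; the paper's result is that this agreement extends to networks with slow synapses but breaks down in the strong-coupling, fast-synapse regime.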
From spiking neuron models to linearnonlinear models
PLoS Comput. Biol., 2011
Abstract

Cited by 5 (1 self)
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static nonlinear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF), and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static nonlinearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static nonlinearity determined using standard reverse-correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing-rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive ...
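The LN cascade itself is easy to state in code (the filter and nonlinearity below are hypothetical placeholders, not the analytically derived ones the paper computes for the LIF and EIF models):

```python
import numpy as np

def ln_cascade(stimulus, kernel, nonlinearity, dt):
    """Linear-nonlinear cascade: convolve the input with a linear temporal
       filter, then apply a static nonlinearity pointwise to get a rate."""
    filtered = np.convolve(stimulus, kernel)[: stimulus.size] * dt
    return nonlinearity(filtered)

dt = 1e-3                                            # 1 ms resolution
t_k = np.arange(0.0, 0.05, dt)
kernel = np.exp(-t_k / 0.01) / 0.01                  # exponential filter, 10 ms
nonlinearity = lambda u: 20.0 * np.maximum(u, 0.0)   # rectification (Hz)

t = np.arange(0.0, 1.0, dt)
stimulus = np.sin(2 * np.pi * 2.0 * t)               # slow 2 Hz input current
rate = ln_cascade(stimulus, kernel, nonlinearity, dt)
```

In the reverse-correlation procedure the abstract mentions, the kernel is estimated from the spike-triggered average of a noise stimulus and the nonlinearity from the filtered-stimulus-to-rate histogram; the cascade evaluation itself is just these two steps.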
Contraction properties of VLSI cooperative competitive neural networks of spiking neurons
Advances in Neural Information Processing Systems 20, 2008
Abstract

Cited by 4 (4 self)
A nonlinear dynamic system is called contracting if initial conditions are forgotten exponentially fast, so that all trajectories converge to a single trajectory. We use contraction theory to derive an upper bound for the strength of recurrent connections that guarantees contraction for complex neural networks. Specifically, we apply this theory to a special class of recurrent networks, often called Cooperative Competitive Networks (CCNs), which are an abstract representation of the cooperative-competitive connectivity observed in cortex. This specific type of network is believed to play a major role in shaping cortical responses and selecting the relevant signal among distractors and noise. In this paper, we analyze contraction of combined CCNs of linear threshold units and verify the results of our analysis in a hybrid analog/digital VLSI CCN comprising spiking neurons and dynamic synapses.
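The flavour of such a bound can be reproduced with a generic sufficient condition (a sketch, not the paper's CCN-specific derivation): for dx/dt = -x + W[x]_+ + input, linear threshold units have slope at most 1, so a spectral norm ||W||_2 < 1 guarantees contraction, and any two trajectories converge to one.

```python
import numpy as np

def simulate(W, x0, inp, dt=0.01, steps=10000):
    """Recurrent network of linear threshold units:
       dx/dt = -x + W [x]_+ + inp  (forward Euler)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + W @ np.maximum(x, 0.0) + inp)
    return x

rng = np.random.default_rng(1)
n = 5
W = rng.standard_normal((n, n))
W *= 0.9 / np.linalg.norm(W, 2)   # enforce ||W||_2 = 0.9 < 1 (contraction bound)
inp = rng.standard_normal(n)

# Two trajectories from very different initial conditions end up together.
xa = simulate(W, 5.0 * rng.standard_normal(n), inp)
xb = simulate(W, 5.0 * rng.standard_normal(n), inp)
```

Forgetting initial conditions this way is exactly the property that makes a hardware CCN's response reproducible despite transistor mismatch and unknown start states.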
On the application of “equation-free” modelling to neural systems
J. Comput. Neurosci., 2006
Abstract

Cited by 4 (3 self)
“Equation-free modelling” is a recently developed technique for bridging the gap between detailed, microscopic descriptions of systems and macroscopic descriptions of their collective behaviour. It uses short, repeated bursts of simulation of the microscopic dynamics to analyse the effective macroscopic equations, even though such equations are not directly available for evaluation. This paper demonstrates these techniques on a variety of networks of model neurons, and discusses the advantages and limitations of such an approach. New results include an understanding of the effects of including gap junctions in a model capable of sustaining spatially localised “bumps” of activity, and an investigation of a network of coupled bursting neurons.
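The lift-evolve-restrict loop at the heart of the method can be sketched on a toy microscopic model (100 particles whose ensemble mean happens to obey dm/dt = -m; all numbers are illustrative): short microscopic bursts estimate the macroscopic time derivative, which is then used to leap forward without ever writing the macro equation down.

```python
import numpy as np

def lift(m, n=100, seed=0):
    """Build a microscopic state (n particles) whose mean is exactly m."""
    spread = 0.01 * np.random.default_rng(seed).standard_normal(n)
    return m + spread - spread.mean()

def restrict(x):
    """Macroscopic observable: the ensemble mean."""
    return x.mean()

def micro_burst(x, dt, steps):
    """Short burst of the microscopic dynamics dx_i/dt = -x_i (Euler)."""
    for _ in range(steps):
        x = x + dt * (-x)
    return x

def projective_step(m, dt_micro=0.01, burst=10, leap=0.15):
    """One equation-free coarse step: lift, evolve briefly, restrict,
       then extrapolate the macro variable using the estimated slope."""
    x0 = lift(m)
    x1 = micro_burst(x0, dt_micro, burst)
    m0, m1 = restrict(x0), restrict(x1)
    slope = (m1 - m0) / (burst * dt_micro)   # estimated dm/dt
    return m1 + leap * slope

m = 1.0
for _ in range(8):           # 8 coarse steps of 0.1 (burst) + 0.15 (leap)
    m = projective_step(m)   # covers t = 2.0 in total
# m lands near the true value exp(-2), yet dm/dt = -m was never evaluated.
```

Each coarse step pays for only 0.1 time units of microscopic simulation but advances the macro clock by 0.25, which is the computational saving that motivates the approach for large neural network models.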